Dataset schema (one record per PMC article; string lengths are the observed minimum–maximum):
accession_id: string, length 9–11
pmid: string, length 1–8
introduction: string, length 0–134k
methods: string, length 0–208k
results: string, length 0–357k
discussion: string, length 0–357k
conclusion: string, length 0–58.3k
front: string, length 0–30.9k
body: string, length 0–573k
back: string, length 0–126k
license: string, 4 distinct values
retracted: string, 2 distinct values
last_updated: string, length 19
citation: string, length 14–94
package_file: string, length 0–35
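For orientation, here is a minimal Python sketch of how one such record might be read. It assumes the records have been exported as JSON lines with exactly these field names; the file name is a placeholder, not part of the dataset documentation.

```python
import json
import tarfile

# Hypothetical file name: the distribution format of this dump is not stated,
# so this sketch simply assumes one JSON object per line with the fields above.
RECORDS_PATH = "pmc_records.jsonl"


def iter_records(path):
    """Yield one dict per PMC article, keyed by the field names listed above."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            yield json.loads(line)


def summarize(record):
    """Print identifying metadata and the length of each main text section."""
    print(record["accession_id"], record["pmid"], record["license"], record["citation"])
    for section in ("introduction", "methods", "results", "discussion", "conclusion"):
        print(f"  {section}: {len(record.get(section, ''))} characters")


def extract_package(record, dest="packages"):
    """Unpack the per-article OA package (e.g. oa_package/31/e8/PMC10788655.tar.gz),
    assuming the tarball has already been fetched from the PMC Open Access service."""
    with tarfile.open(record["package_file"]) as tar:
        tar.extractall(dest)


if __name__ == "__main__":
    for rec in iter_records(RECORDS_PATH):
        summarize(rec)
```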
accession_id: PMC10788655
pmid: 0
BACKGROUND Aortic stenosis is present in over 2% of adults over the age of 65, making it the most common valvular heart disease in the developed world. 1 A further 25% of the population aged 65 or more have aortic sclerosis. 2 For this reason, together with the ageing population, the impact of aortic valve disease on healthcare resources is expected to increase. Aortic sclerosis is associated with a 50% increase in cardiovascular mortality, 1 and hence optimizing the management of aortic valve disease is considered a priority for the cardiovascular field. The earliest experimental approaches towards treating valvular disease include the digital dilation of a stenosed aortic valve, performed by Theodore Tuffier in 1912. Subsequently, open‐heart surgery was the only option available to patients with significant valvular disorders for the best part of a century. 3 Procedures evolved tremendously after the introduction of cardiopulmonary bypass, 4 but surgical valve replacement has always carried a risk of mortality. This risk is higher in patients with comorbidities such as renal insufficiency or vascular disease, which are common in patients with calcific aortic stenosis. 5 As a result, the 2003 Euro Heart Survey found that over 30% of patients with severe valvular disease did not receive intervention, primarily due to comorbidities. 5 A significant breakthrough was achieved in 2002 when Alain Cribier performed the first transcatheter aortic valve replacement (TAVR) by taking an antegrade approach from the right femoral vein. 6 This initial procedure resulted in severe noncardiac complications such as pulmonary embolism, lower limb ischaemia and subsequent death, 6 but TAVR has rapidly grown in popularity over the last two decades. 7 , 8 In 2019, TAVR was performed more frequently in the United States than surgical aortic valve replacement (SAVR). 9 In recent years, the clinical uses of TAVR have been expanded due to advances in valve‐in‐valve TAVR technology, serving as a feasible alternative to redo surgery. 10 Patients requiring a repeat valve procedure often carry a higher surgical risk and may be unsuitable for further surgery due to adhesions, 11 which highlights the importance of alternatives such as the valve‐in‐valve TAVR. Today, TAVR has become an established procedure which serves as a viable alternative when patients are unsuitable for surgical replacement, 12 and it is sometimes preferred for patients with a lower surgical risk due to its less invasive nature. 13 , 14 We provide an overview of the TAVR procedure, the pivotal role of imaging in its peri‐procedural management, and the key complications associated with the intervention. We also compare TAVR with the conventional surgical procedure and explore what the future holds for TAVR and transcatheter approaches for treating diseases involving other valves.
CONCLUSION Since its inception in 2002, the TAVR procedure has made striking advances. Minimally invasive procedures have become the dominant forms of treatment for a variety of conditions. Challenges relating to durability, complications and cost still need to be overcome, but TAVR will continue to generate excitement in the field of interventional cardiology for years to come.
Abstract Transcatheter aortic valve replacement (TAVR) has emerged as a ground‐breaking, minimally invasive alternative to traditional open‐heart surgery, primarily designed for elderly patients initially considered unsuitable for surgical intervention due to severe aortic stenosis. As a result of successful large‐scale trials, TAVR is now being routinely applied to a broader spectrum of patients. In deciding between TAVR and surgical aortic valve replacement, clinicians evaluate various factors, including patient suitability and anatomy through preprocedural imaging, which guides prosthetic valve sizing and access site selection. Patient surgical risk is a pivotal consideration, with a multidisciplinary team making the ultimate decision in the patient's best interest. Periprocedural imaging aids real‐time visualization but is influenced by anaesthesia choices. A comprehensive postprocedural assessment is critical due to potential TAVR‐related complications. Numerous trials have demonstrated that TAVR matches or surpasses surgery for patients with diverse surgical risk profiles, ranging from extreme to low risk. However, long‐term follow‐up data, particularly in low‐risk cases, remains limited, and the applicability of published results to younger patients is uncertain. This review delves into key TAVR studies, pinpointing areas for potential improvement while delving into the future of this innovative procedure. Furthermore, it explores the expanding role of TAVR technology in addressing other heart valve replacement procedures. In this review, we provide an updated overview of the TAVR procedure, the main aspects of preprocedural patient assessment, and the key complications of the procedure. We also summarize the current evidence comparing TAVR and SAVR, before exploring the potential future directions for TAVR. Created with BioRender. Srinivasan A , Wong F , Wang B . Transcatheter aortic valve replacement: past, present, and future . Clin Cardiol . 2024 ; 47 : e24209 . 10.1002/clc.24209
Abbreviations: ACC/AHA, American College of Cardiology/American Heart Association; CEP, cerebral embolic protection; CVE, cerebrovascular event; ER, extreme risk; ESC/EACTS, European Society of Cardiology/European Association for Cardio‐Thoracic Surgery; EuroSCORE II, European System for Cardiac Operative Risk Evaluation II; HR, high risk; LR, low risk; MDCT, multidetector computed tomography; NOTION, Nordic Aortic Valve Intervention; PARTNER, Placement of Aortic Transcatheter Valve; PPM, permanent pacemaker; SFAR, sheath‐to‐femoral‐artery ratio; SMART, SMall Annuli Randomized To Evolut or SAPIEN; SOLVE‐TAVI, compariSon of secOnd‐generation seLf‐expandable versus balloon‐expandable Valves and gEneral versus local anaesthesia in Transcatheter Aortic Valve Implantation; STS‐PROM, Society of Thoracic Surgeons Predicted Risk of Mortality; SURTAVI, Surgical Replacement and Transcatheter Aortic Valve Implantation; TAVR, transcatheter aortic valve replacement; TMVR, transcatheter mitral valve replacement; TOE, transoesophageal echocardiography; TTE, transthoracic echocardiography. THE TAVR PROCEDURE Since the first TAVR in 2002, the procedure has evolved significantly in its technique, the access points, and the choice of anaesthesia. Cribier performed the first TAVR on a 57‐year‐old male with significant co‐morbidities who was placed under mild, conscious sedation and local anaesthesia. 6 After this first attempt, further measures were introduced including general anaesthesia and intraprocedural transoesophageal echocardiography (TOE). The drawbacks of this approach include a requirement for endotracheal intubation and ventilation, haemodynamic instability and longer hospital stays. 15 Consequently, the “minimalist” combination of local anaesthesia and conscious sedation demonstrated by Cribier has since been revisited, 16 and there has been an increased focus on simplifying the TAVR procedure. Anaesthetic approaches The anaesthetic approaches to TAVR have been compared mostly through nonrandomized trials and registry data. In 2008, a case series by Behan et al. 17 suggested the potential benefits of TAVR with sedation, which include a shorter stay in high‐dependency areas and fewer complications. Conversely, in the same year, Ree et al. 18 reported adverse experiences when using sedation alone after four patients needed unplanned vascular surgery to repair the TAVR access site, which necessitated conversion to general anaesthesia. More recent studies have suggested that local anaesthesia with conscious sedation is a feasible approach, with a 2018 registry analysis by Eskandari et al. 19 concluding that procedural outcome, 30‐day and 1‐year mortality are not affected by the anaesthetic approach. It has become apparent that general anaesthesia is associated with a longer procedure duration and hospital stay. 19 From a managerial perspective, this may also have cost implications for patient care. 20 Several centers have therefore adopted a minimalist approach for patients necessitating TAVR. 21 Access sites Femoral access remains the preferred approach, 22 and technological improvements have reduced the sheath sizes from 24–25 Fr down to 14–16 Fr, 23 thus decreasing bleeding complications. 24 The transfemoral approach is the least invasive since it is usually performed completely percutaneously, and therefore it is the most permissive to local anaesthesia and sedation. 25 However, alternative nonfemoral access sites are sometimes used, as seen in 28.8% of cases reported by the UK TAVI registry. 25 These routes are typically chosen when femoral access is limited by factors such as tortuous iliofemoral vasculature and obstructive peripheral vascular disease.
26 One of the most established nonfemoral alternatives is the transapical route, 25 , 27 which involves performing a left mini‐thoracotomy before puncturing the apex to deliver the valve system. 26 This approach has been associated with fewer vascular complications than the transfemoral route, but a higher rate of all‐cause mortality. 28 Less common methods include direct aortic access 29 and the transaxillary/subclavian approaches, 30 which were both used in approximately 5% of cases in the UK TAVI registry. 25 These methods also require surgery to expose and access the artery, although a percutaneous TAVR using the trans‐axillary route has been described in the literature with no major complications relating to the access site. 31 Figure 1 depicts the various access routes available for TAVR. Valve systems There are multiple heart valve systems used for the TAVR procedure. First, Medtronic has developed a series of Evolut heart valves, which are composed of porcine tissue attached to a self‐expanding nickel titanium frame. 32 A delivery catheter is used to insert and release these artificial valves into the body, before they self‐expand and attach to the damaged heart valve. 32 The Sapien 3 and Sapien 3 Ultra valves designed by Edwards Lifesciences are made of bovine tissue and attached to a cobalt‐chromium frame. 33 In contrast, these valves are delivered using a balloon catheter and expanded by the balloon, before anchoring to the damaged aortic valve. 33 Heart valve systems have evolved significantly over the past decade. First‐generation devices, such as the Edwards Sapien and the Medtronic CoreValve, demonstrated efficacy in early trials. Unfortunately, they were also linked with issues such as paravalvular aortic regurgitation, vascular complications, conduction disturbances and stroke, thus leading to poorer prognoses. 34 , 35 , 36 However, technological advances have focused on reducing such complications; for example, smaller delivery sheaths are used to reduce the degree of vascular trauma and bleeding events, whilst the addition of outer skirts has helped to prevent paravalvular regurgitation (Figure 2 ). 34 Currently, the literature supports the use of newer generation versions of both self‐expandable 37 and balloon‐expandable 38 valves, but there is limited data comparing these two options. A small trial of 241 patients receiving TAVR with first‐generation valve systems found that balloon‐expandable devices had a higher success rate. 39 Nevertheless, the more recent compariSon of secOnd‐generation seLf‐expandable versus balloon‐expandable Valves and gEneral versus local anaesthesia in Transcatheter Aortic Valve Implantation (SOLVE‐TAVI) trial showed that newer generation self‐expandable and balloon‐expandable valves were equivalent as per the composite endpoint of all‐cause mortality, stroke, permanent pacemaker (PPM) implantation and paravalvular leakage. 40 It should be noted that this trial only consisted of 447 patients, 40 and thus larger studies such as the ongoing SMall Annuli Randomized to Evolut or SAPIEN Trial (SMART) 41 are required to compare individual clinical endpoints and draw firmer conclusions. The valve systems produced by Medtronic and Edwards Lifesciences are the most established in the TAVR market, but new competitors are emerging.
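Before turning to these newer entrants, a brief numerical aside on the sheath sizes quoted under Access sites above and on the sheath‐to‐femoral‐artery ratio (SFAR) discussed later under vascular complications. The sketch below only illustrates the arithmetic: French gauge converts to a nominal diameter of Fr/3 mm, quoted sheath sizes usually refer to the inner diameter (so the outer diameter relevant to SFAR is somewhat larger), and the artery diameter used in the example is hypothetical rather than taken from this review.

```python
def french_to_mm(french_size: float) -> float:
    """French gauge to nominal diameter: 1 Fr = 1/3 mm."""
    return french_size / 3.0


def sheath_to_femoral_artery_ratio(sheath_od_mm: float, min_artery_diameter_mm: float) -> float:
    """SFAR = sheath outer diameter / minimal femoral artery luminal diameter."""
    return sheath_od_mm / min_artery_diameter_mm


# Sheath sizes quoted in the review: roughly 24-25 Fr for early systems, 14-16 Fr today.
for fr in (24, 14):
    print(f"{fr} Fr ~ {french_to_mm(fr):.1f} mm nominal diameter")

# Hypothetical example only: a 6.0 mm sheath outer diameter in a 5.5 mm artery.
# Higher SFAR values indicate a tighter fit and, per the literature cited later
# in this review, a higher risk of vascular complications.
print("SFAR =", round(sheath_to_femoral_artery_ratio(6.0, 5.5), 2))
```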
Abbott previously gained Food and Drug Administration (FDA) approval for the self‐expanding Portico TAVR system, 42 before developing the improved Navitor system that was approved by the FDA in 2023. 43 Similarly, Boston Scientific initially developed the self‐expanding ACURATE neo device, 44 but this was found to be inferior to the existing Sapien 3 45 and CoreValve Evolut 46 systems. However, the subsequent ACURATE neo2 device demonstrated significant improvements 44 , 47 and received a CE mark in 2020, although it is still awaiting FDA approval. 48 It remains to be seen whether these novel devices will become well‐established options for TAVR in the future. PREPROCEDURAL EVALUATION OF PATIENT SUITABILITY FOR TAVR A comprehensive assessment of patient suitability for valve replacement is conducted by a multidisciplinary heart valve team. Mechanical valves can only be implanted surgically and therefore a transcatheter approach is not appropriate for patients who are deemed more suitable for mechanical valve replacement. 49 The surgical risk profile of a patient can be calculated using scoring systems, such as the Society of Thoracic Surgeons Predicted Risk of Mortality (STS‐PROM) score 50 or the European System for Cardiac Operative Risk Evaluation II (EuroSCORE II). 51 Initial TAVR trials were conducted on patients who were inoperable or had a high surgical risk profile (often defined as an STS‐PROM/EuroSCORE II >8%–10%) 50 , 51 to explore the viability of TAVR as an alternative to surgery. Together with improvements in technology and center experience, its feasibility in lower and intermediate risk patient subgroups has since been explored and supported. 52 , 53 Age is important when evaluating the suitability of patients for TAVR; patients from older age groups may benefit from the procedure since it is less invasive compared to conventional open‐heart surgery. 13 , 14 Furthermore, a differential effect of patient sex on TAVR outcomes has been reported within the literature. 54 Females have been reported to be more at risk of vascular complications compared to men; this is likely due to more tortuous iliofemoral vasculature and smaller luminal diameters. 54 , 55 Large‐scale observational studies have shown that female TAVR patients are often frailer and present with higher STS‐PROM risk scores at baseline. 54 Although patient frailty is not included in the calculation of the STS‐PROM and EuroSCORE II scores, it serves as an independent predictor of mortality and complications during postprocedural recovery. 56 , 57 , 58 The two main guidelines for choosing between SAVR and TAVR are the 2020 American College of Cardiology (ACC)/American Heart Association (AHA) 13 and the 2021 European Society of Cardiology (ESC)/European Association for Cardio‐Thoracic Surgery (EACTS) 14 guidelines. Both guidelines recommend SAVR for younger patients who are suitable for surgery whilst preferring TAVR for older inoperable patients, but the defined age ranges differ as shown in Table 1 . There is a further subset of patients who are suitable for either TAVR or SAVR, and thus the assessment of individual characteristics and the balance between valve durability and life expectancy is crucial. 13 , 14 After deciding whether a patient is suitable for valve replacement and choosing between SAVR and TAVR, preprocedural assessment is also required to determine how the TAVR should be carried out and whether any concomitant procedures are necessary.
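As a rough illustration of the surgical‐risk bands used throughout this review, the short sketch below maps an STS‐PROM score onto the approximate categories quoted in the text (≥8% high/extreme, 4%–8% intermediate, <4% low). The exact cut‐offs vary between trials and guidelines, so these values are illustrative only, and the actual decision rests with the multidisciplinary heart team.

```python
def sts_prom_band(score_percent: float) -> str:
    """Map an STS-PROM score (%) onto the approximate risk bands quoted in this
    review (>=8% high/extreme, 4-8% intermediate, <4% low). Illustrative only."""
    if score_percent >= 8.0:
        return "high/extreme risk"
    if score_percent >= 4.0:
        return "intermediate risk"
    return "low risk"


# Example scores: the mean STS-PROM values reported later for PARTNER 2 (5.8%)
# and SURTAVI (4.5%) both fall in the intermediate-risk band.
for score in (9.5, 5.8, 4.5, 1.9):
    print(f"STS-PROM {score}% -> {sts_prom_band(score)}")
```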
Sizing of the prosthetic Unlike SAVR, where sizers can be used under direct visualization to determine the optimal fit of the prosthesis, imaging is needed before TAVR to assess the aortic annulus diameter. 59 An undersized prosthesis increases the risk of paravalvular regurgitation, which is seen in more than half of patients post‐TAVR. 60 Conversely, an oversized prosthesis could impinge on nearby conductive tissue and predispose patients to arrhythmias. 61 3‐dimensional imaging modalities, such as multidetector computed tomography (MDCT), have over time superseded 2‐dimensional echocardiography to become the gold standard for assessing the aortic annulus size. 13 , 14 This has revolutionized the preprocedural workup for TAVR since 2‐dimensional echocardiography often underestimated the aortic annulus diameter, leading to prosthesis undersizing 62 , 63 and increased risks of paravalvular regurgitation. Yet, despite its clear advantages, MDCT requires administration of a contrast medium and therefore may not be suitable for patients with renal impairment or contrast allergies. 13 , 14 In such cases, 3‐dimensional TOE could be used as a viable alternative, with recent evidence showing that it correlates well with MDCT when assessing the aortic annulus parameters. 64 , 65 Valve and leaflet considerations Assessment of valve calcification is an integral aspect of the preprocedural workup. Although some aortic valve calcification is beneficial for keeping the prosthetic valve in place, excessive calcification can impair prosthetic valve apposition and risk paravalvular regurgitation. 66 Pushing the prosthetic through the diseased aortic valve can also displace any calcified deposits surrounding the leaflets; this can lead to embolisation and impair long‐term patient outcomes. 66 Indeed, some authors have shown a correlation between the degree of calcification, based on the Agatston calcium score, and embolization risk. 66 , 67 In some scenarios, valve calcification can also lead to the fusion of native valve leaflets. This could hinder prosthetic valve expansion during valve deployment, which may further precipitate paravalvular regurgitation. 66 , 67 Coronary artery disease Imaging enables the screening of coronary artery disease, which is commonly seen in conjunction with aortic stenosis. 68 Some studies have reported its prevalence to exceed 60% in patients undergoing SAVR and around 50% in patients undergoing TAVR. 68 Assessment of coronary artery disease plays an important role in the choice of management between surgery and TAVR. Open heart surgery may be preferred over TAVR if patients require additional cardiac surgeries, such as a coronary artery bypass graft, since surgery allows multiple procedures to be conducted in the same sitting. 69 Although there is evidence for TAVR with concomitant coronary intervention as a potential alternative in patients who are suitable, more research is needed for firm conclusions to be made. 13 , 14 , 70 Coronary artery disease is traditionally evaluated using invasive coronary angiography. Studies have consistently highlighted excellent negative predictive values for CT coronary angiography in ruling out significant coronary artery disease. 71 , 72 , 73 Nonetheless, one drawback of CT coronary angiography is that its diagnostic performance could be hindered by coronary calcification, which is common in TAVR patients.
74 CT coronary angiography has received considerable attention over time since it demands lower contrast volume, has good diagnostic accuracy and is less invasive compared to conventional angiography. 73 , 75 This has important clinical and cost implications because minimizing the invasiveness of preprocedural evaluation could shorten the duration of in‐hospital stay following TAVR. 73 POSTPROCEDURAL IMAGING AND COMPLICATIONS TTE remains as the modality of choice for postprocedural follow‐up, owing to its good diagnostic accuracy, ability to monitor haemodynamic status, absence of ionizing radiation and its extensive availability. 76 Although minor discrepancies exist, guidelines in general recommend that patients should receive TTE at the following time points: (1) before hospital discharge, (2) 30 days post‐TAVR, (3) 1‐year post‐TAVR, and (4) yearly thereafter. 13 , 77 , 78 TTE can be used to monitor postprocedural complications and assess the haemodynamic performance of the prosthetic. 14 In addition, TTE is crucial in identifying cardiac structural changes following TAVR. Specifically, long‐term aortic stenosis commonly results in left ventricular hypertrophy; TAVR alleviates the left ventricular afterload and promotes left ventricular mass regression. 79 It is therefore important to monitor these changes via echocardiographic assessment as they have been shown to be associated with patient rehospitalisation and survival rate following TAVR. 79 , 80 Vascular complications The process of achieving vascular access for TAVR is a major cause of complications, such as pseudoaneurysm, haematoma, dissection, and perforation. 81 As the procedure involves puncturing a hole in the artery to gain access, a closure device is required to seal the arteriotomy at the end of the procedure; failure of these closure devices will predispose patients to bleeding, which can be life‐threatening. 81 A meta‐analysis, which defined outcomes in accordance with the Valve Academic Research Consortium criteria, found that TAVR with first‐generation valves resulted in a major vascular complication in approximately 12% of patients, as well as an approximately 16% rate of life‐threatening bleeding. 82 Notably, vascular complications and major bleeding are independently associated with poorer outcomes including a higher incidence of death at 30 days and 1 year, alongside an increased likelihood of hospitalization at 1 year. 83 Despite the high incidence of vascular complications in early studies, this appears to be decreasing over time, with recent trials reporting rates below 5%. 52 , 53 Along with increased operator experience, a combination of factors has led to this improvement. First, newer generation TAVR devices possess smaller sheath diameters and flexible delivery systems, which reduces the degree of vascular trauma experienced during the procedure. 34 A 2020 meta‐analysis revealed that significantly lower rates of vascular complications were associated with newer generation devices (5.42 ± 4.75%) compared to first‐generation devices (11.52 ± 7.23%). 84 The other key factor has been the use of MDCT to assess the suitability of the peripheral vasculature, as this imaging modality enables the identification of important vascular pathologies at the access site, such as heavy calcification, which may increase the risk of complications. 
85 MDCT is also used to measure the minimal luminal diameter, which allows for the sheath to femoral artery ratio (SFAR) to be calculated 85 ; high SFAR is a major risk factor for vascular complications in TAVR. 86 Bioprosthetic valve failure TAVR is performed using bioprosthetic valves, and thus patients may experience complications as a result of structural valve deterioration and eventual bioprosthetic valve failure. 87 Although patients with a high life expectancy are usually offered surgery instead of TAVR due to concerns about durability, a significant number of TAVR patients still experience these complications during their lifetime. For example, a meta‐analysis found that the pooled incidence of structural valve deterioration in TAVR patients at 1 year was 4.93%. 88 Given that the indications for TAVR are being expanded further to low‐risk patients with a longer life expectancy, 13 , 14 bioprosthetic valve failure could soon become a much bigger issue. The current concerns regarding TAVR durability may be resolved as the devices continue to be improved with each generation. 89 Otherwise, failed bioprosthetic valves can be managed with redo‐SAVR, 90 but the high surgical risk profile of many TAVR patients limits the feasibility of this option. Valve‐in‐valve TAVR, which involves implanting a second device inside the initial prosthesis, has shown potential as an alternative to redo‐SAVR, with meta‐analyses highlighting similar mortality rates with the two procedures. 91 , 92 Moreover, the valve‐in‐valve procedure may be associated with a lower incidence of major bleeding and stroke, thus increasing its suitability for patients with a high surgical risk. 92 Paravalvular aortic regurgitation Paravalvular aortic regurgitation is commonly seen in patients following TAVR but refinements in the procedure and prosthesis design have dramatically reduced its incidence. 84 According to a meta‐analysis conducted by Winter et al., 84 its occurrence has decreased from around 12% in first‐generation devices to about 2% in more recent studies. Although most cases are mild in severity, a significant proportion of patients experience moderate to severe regurgitation which are associated with increased mortality rates. 93 , 94 During TAVR, the prosthetic valve is often superimposed onto its diseased counterpart; this means that the procedure frequently yields an insufficient seal that allows blood to “leak” around the bioprosthesis. 95 Furthermore, patients who need TAVR often come from older age groups and they therefore frequently co‐present with calcified aortic valves. 96 Valve calcification interferes with the expansion and apposition of the bioprosthetic when it is being deployed, ultimately predisposing patients to paravalvular regurgitation. 97 As aforementioned, preprocedural assessment is vital for mitigating the risk of paravalvular regurgitation, and the addition of outer skirts to newer‐generation valve systems has been a useful innovation. Additional periprocedural interventions to reduce the degree of paravalvular leakage include balloon postdilation to expand the valve further and achieve a better seal, which has shown promise for balloon‐expandable and self‐expandable valves. 98 , 99 Paravalvular regurgitation may otherwise be due to suboptimal positioning of the valve and, in certain cases, this can be treated by implanting a second valve using the valve‐in‐valve method. 
100 An Italian registry analysis concluded that the valve‐in‐valve technique is an effective option for managing acute paravalvular leakage without necessitating surgery. 100 Cerebrovascular events (CVEs) CVEs following TAVR are not uncommon, and they pose a significant clinical challenge for management. Patients are most vulnerable to CVEs during and immediately following the procedure (<24 h) 101 ; importantly, CVEs during the first 30 days of recovery are associated with greater 30‐day postprocedural mortality, with some studies reporting a greater than sixfold increase in risk. 102 , 103 A crude generalization is that acute (≤24 h) and subacute (<30 days) CVEs are more likely to be associated with the TAVR procedure itself, during which debris can be displaced from the valve or the blood vessels. 101 Conversely, CVEs that occur >30 days following the procedure are often associated with long‐standing comorbidities, such as chronic atrial fibrillation and peripheral vascular disease. 101 However, current data on periprocedural CVEs frequently omit the timing at which these events have occurred, restricting the scope for further analysis. The need to prevent periprocedural CVEs has inspired the design of cerebral embolic protection (CEP) systems such as the Sentinel 104 and Emblok 105 devices. These are both filters which permit blood flow from the aorta, whilst removing embolic debris to prevent CVEs. A 2020 first‐in‐human pilot study on 20 participants concluded that the Emblok system seems to be safe, and that the procedure is achievable, 105 although larger studies should be conducted. The Sentinel CEP system is more established, gaining FDA approval in 2017, 104 but a 2022 randomized controlled trial found that the use of this device did not significantly reduce the likelihood of stroke. 106 Antithrombotic therapy is another vital component of CVE prevention in the postprocedural period. The most recent guidelines recommend the use of low dose aspirin monotherapy or dual antiplatelet therapy with aspirin and clopidogrel for 3–6 months following TAVR. 13 However, TAVR patients can also experience bleeding complications, and hence the patient profile should influence the antithrombotic regimen. The POPular TAVI trial has provided some valuable insights regarding when to use different antithrombotic strategies. 107 , 108 The study found that for patients with no indication for anticoagulation, aspirin monotherapy reduced the risk of bleeding compared to a combination of aspirin and clopidogrel, without increasing the incidence of ischaemic events. 107 Meanwhile, patients who had a requirement for oral anticoagulation suffered fewer severe bleeding events when receiving oral anticoagulation alone instead of oral anticoagulation with clopidogrel, and again there was no significant increase in major ischaemic events. 108 Conductive abnormalities Arrhythmias often result from direct injury to the cardiac conduction tissue during the TAVR procedure, with some patients needing postintervention PPM implantation. 109 The implanted prosthesis requires slight oversizing to ensure adequate anchorage and minimize the risk of postprocedural paravalvular aortic regurgitation. 110 Nonetheless, it is often difficult to gauge the optimal degree of oversizing needed, and “overshooting” could consequently traumatize adjacent conductive tissue.
110 Although prior studies investigating the intersex differences in post‐TAVR complications have been conflicting, 111 , 112 a recent meta‐analysis has shown that men are more at risk of needing PPM implantation. 61 This finding by Ullah et al. 61 could be partly attributed to the greater preexisting comorbidity burden that male TAVR patients often present with. 113 , 114 Male patients also tend to receive implants of larger sizes as they generally have larger aortic annulus diameters compared to women, placing men at increased risk of conductive complications. 113 , 114 Pre‐existing conductive abnormality (acquired before TAVR) is another significant predictor of PPM implantation. 115 , 116 , 117 In an analysis of the Placement of Aortic Transcatheter Valve (PARTNER) 1 trial, patients who received PPM implantation after TAVR were nearly four times more likely to have had pre‐existing right bundle‐branch block and nearly twice as likely to have had pre‐existing left anterior fascicular block when compared to patients who did not require PPM implantation 118 ; similar findings were also reported in subsequent studies. 61 On the other hand, some risk factors for PPM implantation can be controlled, such as the choice of prosthesis. 61 Self‐expanding valves are traditionally associated with a greater risk of conductive abnormalities when compared to its balloon‐expandable counterpart. 84 , 119 , 120 The reported incidence of PPM implantation post‐TAVR has ranged between 5% and 10% for the Edwards SAPIEN balloon‐expandable valves and approximately 25% for the self‐expanding Medtronic CoreValve. 40 , 121 However, the more recent SAPIEN 3 valve has been reported by some to exhibit higher rates of PPM dependency when compared to its predecessors. Despite reducing the risk of paravalvular aortic regurgitation, the design addition of an outer sealing skirt (Figure 2 ) likely increases the radial force exerted on surrounding cardiac tissue and predisposes patients to atrioventricular conductive disturbances. 122 Recently, several studies have investigated whether the risk of PPM implantation can be mitigated by modifying the conventional TAVR procedure. Firstly, Sammour et al. 123 demonstrated a novel, systematic method for deploying balloon‐expandable valves, and their findings suggest that reducing the depth of valve implantation from ~3.2 to ~1.5 mm may overcome the issues seen with the SAPIEN 3 valve. This novel method decreased the rate of PPM implantation at 30 days from 13.1% to 5%, and the incidences of complete heart block and left bundle branch block were also significantly lower. 123 The results of this study inspired the use of a cusp‐overlapping technique for self‐expandable TAVR to achieve a high implantation position, and again the rates of PPM implantation were consequently lower. 124 It should be noted that high implantation may theoretically increase the risk of valve embolisation and aortic regurgitation, 125 but overall the safety profile of the cusp‐overlapping technique appears to be similar to the conventional approach. 124 , 126 MAJOR TAVR TRIALS Several pivotal trials have compared TAVR with the conventional SAVR to evaluate factors such as efficacy and safety. These trials can also be classified based on the surgical risk profile of the selected patients, using the mean STS‐PROM score. 
127 For example, the patient that was selected for the first TAVR was considered to be a “last‐resort” case 6 and, for the first few years, TAVR was exclusively performed on patients who were unsuitable for surgery. 128 Due to innovations in technology, increased data and operator experience, there has been a growing interest in assessing the efficacy of TAVR in patients with lower risk profiles. Therefore, the participants in the PARTNER 1A 35 , 129 and 1B trials 130 , 131 which began in 2007 had a higher risk profile than those assessed in the PARTNER 3 trial 52 , 132 several years later. Table 2 summarizes the landmark TAVR trials that have been published to date. The trials which fall under the extreme‐risk category consist of the PARTNER 1B 130 , 131 and CoreValve Extreme Risk (CoreValve ER) 133 trials. The participants of these trials were deemed inoperable, so TAVR was compared with standard medical therapy or an objective performance goal. The PARTNER 1B trial found that TAVR performed with the first‐generation Edwards Sapien heart‐valve system superseded standard medical therapy with an absolute risk reduction in all‐cause mortality of over 20% after 1 130 and 5 years, 131 despite the increased incidence of neurovascular events at 30 days and 1 year. 130 This positive finding was supported by the single‐arm nonrandomised CoreValve ER trial which investigated the Medtronic self‐expanding CoreValve prosthesis instead. 133 In this study, the rate of all‐cause mortality or major stroke following TAVR was 26.0% after 1 year and 38.0% after 2 years, but these negative outcomes were primarily due to the participants’ co‐morbidities rather than valve performance. 133 The PARTNER 1A 35 , 129 and CoreValve U.S. Pivotal High Risk (CoreValve HR) 36 trials assessed the efficacy of TAVR in patients from the high‐risk category (STS‐PROM ≥8%). In these trials, SAVR was considered a viable alternative and, therefore, it was possible to compare TAVR with SAVR. A total of 699 patients were enrolled into the PARTNER 1A trial which assessed the balloon expandable Edwards Sapien valve, and the results showed TAVR to be noninferior to SAVR in terms of all‐cause mortality at 30 days, 1 35 and 5 years. 129 However, TAVR was associated with a greater incidence of CVEs than SAVR after 30 days and 1 year. 35 Although the differences in risk diminished after 5 years, 129 these early findings highlighted the potential room for technological (e.g., embolic protection devices) and procedural adaptations to improve patient outcomes in the years to come. Following PARTNER 1A, the randomized CoreValve HR trial found that the rate of 1‐ and 2‐year all‐cause mortality was significantly lower in the TAVR group than the SAVR group, making it the first trial to show the potential superiority of TAVR over conventional surgery. 36 As TAVR became more widely approved, there was a growing desire to expand its indications to lower risk groups. Consequently, the risk profile of trial participants started to decrease. The most notable trials in the intermediate‐risk category (STS‐PROM ≥4%) are the PARTNER 2 134 and Surgical Replacement and Transcatheter Aortic Valve Implantation (SURTAVI) 135 trials, with mean STS‐PROM scores of 5.8% and 4.5%, respectively. 
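Before turning to the individual trial results, the mortality figure quoted above for PARTNER 1B can be read through the standard absolute‐risk‐reduction arithmetic: an absolute risk reduction (ARR) of roughly 20% corresponds to a number needed to treat (NNT) of about five. The sketch below shows only this textbook calculation; the 20% input is the approximate figure cited in the text, not an exact trial statistic.

```python
def absolute_risk_reduction(control_event_rate: float, treatment_event_rate: float) -> float:
    """ARR = event rate without the intervention minus event rate with it."""
    return control_event_rate - treatment_event_rate


def number_needed_to_treat(arr: float) -> float:
    """NNT = 1 / ARR, i.e. patients treated per additional event avoided."""
    return 1.0 / arr


# Illustrative only: an ARR of ~0.20 in all-cause mortality, as quoted for
# PARTNER 1B, implies an NNT of roughly 5.
arr = 0.20
print(f"ARR = {arr:.0%}, NNT ~ {number_needed_to_treat(arr):.0f}")
```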
The primary finding from the PARTNER 2 trial (Edwards Sapien XT valve) was that the rate of all‐cause mortality or disabling stroke at 2 years from TAVR and SAVR was similar, although there was a lower incidence of major vascular complications and paravalvular aortic regurgitation in the surgical patients. 134 Further analysis of the trial published in 2020 concluded that the 5‐year outcomes of both procedures were also similar in regard to mortality and disabling stroke; however, re‐hospitalizations and aortic valve re‐interventions were more common after TAVR than SAVR. 136 Meanwhile, the SURTAVI trial supported these findings, albeit using the CoreValve and Evolut R bioprostheses instead. 135 Last, the use of TAVR in patients with a low surgical risk profile (STS‐PROM <4%) was evaluated by the PARTNER 3, 52 , 132 Evolut Low Risk (Evolut LR) 53 , 137 and Nordic Aortic Valve Intervention (NOTION) 138 , 139 , 140 trials. The PARTNER 3 trial 52 , 132 randomly assigned 1000 patients to either TAVR, performed using the Edwards Sapien 3 system, or SAVR. Analysis showed that the rate of the composite of death, stroke, or rehospitalisation after 1 52 and 2 years 132 was significantly lower in the TAVR group than with SAVR. After 5 years, the incidence of this composite endpoint was similar in the two groups. 141 Contrary to previous studies, 35 , 134 TAVR was not associated with an increased incidence of moderate or severe paravalvular regurgitation in the PARTNER 3 trial. 52 , 132 Additionally, the TAVR patients had shorter index hospitalizations suggesting that the procedure could be cost‐effective. 52 The Evolut LR trial also found that TAVR with either the CoreValve, Evolut R or Evolut PRO was noninferior to SAVR based on the composite endpoint of death or disabling stroke after 2, 53 3, 137 and 4 years, 142 with the difference between the two groups increasing over time in favor of TAVR. 143 The longest follow‐up data comparing TAVR and SAVR in low surgical risk patients was collected in the NOTION trial, which reported no statistically significant differences in all‐cause mortality, stroke and myocardial infarction at 5 138 and 10 years. 140 Moreover, TAVR appears to demonstrate lower rates of moderate or greater structural valve deterioration and bioprosthetic valve dysfunction after 10 years compared to SAVR, according to data presented in 2023. 140 FUTURE DIRECTIONS Over the best part of two decades, the TAVR procedure has evolved tremendously and is becoming a safe and viable alternative to surgery. The popularity of TAVR is likely to accrue over time because of increasing data and new technological innovations. Examples of this include the aforementioned development of CEP devices to address the challenge of peri‐procedural CVEs. 104 , 105 Furthermore, in response to the higher reported incidence of paravalvular regurgitation following TAVR than SAVR, 36 , 129 , 134 solutions are being worked on such as novel “occluders” which are placed around the prosthetic valve to prevent leakage, and these have shown potential in vitro. 144 TAVR may also become more popular because of expanding indications. Although an initial study evaluating the use of TAVR for pure native aortic regurgitation found that many patients required second valve implantation or experienced residual regurgitation, 145 newer generation devices have shown more positive results. 
146 Furthermore, aortic valve replacement has typically been reserved for patients with severe aortic stenosis, 13 , 14 but recent observational data has suggested that moderate aortic stenosis is enough to significantly worsen patient mortality, especially when left ventricular ejection fraction is reduced. 147 , 148 These findings have sparked interest into a potential role for TAVR in the management of nonsevere aortic stenosis, with retrospective studies showing promising results. 148 , 149 This topic is being further explored by the ongoing TAVR UNLOAD, 150 Evolut EXPAND TAVR II Pivotal 151 and PROGRESS 152 randomized trials. Valve durability is still a concern regarding the use of TAVR in low‐risk patients. A study estimated that in low‐risk patients with a mean age of 73, TAVR valves need to be at least 30% as durable as surgical valves to prevent a reduction in life expectancy; however, this threshold is higher for younger patients. 153 Future long‐term data on valve durability is required to improve confidence over the use of TAVR in low‐risk patients from younger age groups. Another key challenge is identifying the optimal post‐TAVR anticoagulation regimen. 154 Oral anticoagulants are currently the standard therapy, 154 but trials are being conducted to compare direct oral anticoagulants with vitamin K antagonists. 155 , 156 Establishing clarity over this aspect of postprocedural care will be an important future objective. The growing success of TAVR naturally raises the question: can transcatheter techniques be used to replace other valves? The mitral valve is oval‐shaped and saddle‐like so developing a transcatheter mitral valve replacement (TMVR) has been difficult. A meta‐analysis conducted by Takagi et al. 157 found that TMVR was associated with elevated mortality compared to predicted operative mortality, but a key limitation was that the included studies lacked the control of conventional mitral valve surgery. In January 2020, Abbott's Tendyne TMVR device became the first to receive a CE mark and subsequent approval for use in Europe. 158 Transcatheter approaches for the replacement of tricuspid 159 , 160 and pulmonary 161 valves are being developed and studied as well, but they are both significantly behind TAVR in terms of progress to date. CONFLICT OF INTEREST STATEMENT The authors declare no conflicts of interest.
DATA AVAILABILITY STATEMENT Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
license: CC BY
retracted: no
last_updated: 2024-01-16 23:43:48
citation: Clin Cardiol. 2024 Jan 14; 47(1):e24209
package_file: oa_package/31/e8/PMC10788655.tar.gz
accession_id: PMC10788658
pmid: 38226133
Introduction Neisseria meningitidis is a Gram-negative diplococcus that may be found as a benign commensal bacterium in the human nasopharynx in approximately 10% of healthy individuals [ 1 - 3 ]. In some cases, it can cause invasive diseases, such as bacteremia, meningitis, or septic arthritis. Mortality rates can be extremely high in untreated cases, reaching 80%, while in treated cases they range from 4 to 20% [ 1 - 3 ]. So far, twelve serogroups have been identified, six of which (A, B, C, W, X, and Y) are responsible for almost all cases of invasive meningococcal disease (IMD) [ 1 - 2 ]. A recent meta-analysis [ 2 ] quantified the significant risk of developing IMD in the presence of certain factors, such as HIV infection, active or passive smoking, and crowded living environments. A Chinese meta-analysis recently estimated an annual incidence of 0.66-2.30 per 100,000 inhabitants [ 3 ]. To prevent the spread of resistant strains, it is of utmost importance to monitor antibiotic susceptibility regionally and globally [ 4 ]. Meningococcal septic arthritis (MSA) is an uncommon manifestation of IMD that occurs when N. meningitidis colonizes synovial joints. Colonization can also occur following surgery or penetrating trauma [ 5 - 11 ]. Primary MSA usually presents with mild symptoms such as localized pain, inflammation, and reduced range of movement of large joints. Several case reports [ 5 - 10 ] and a retrospective study [ 11 ] have described cases of monoarticular involvement. Although rarer, polyarticular presentations have also been reported [ 10 - 11 ]. Treatment of MSA typically involves arthroscopic washout, aspiration, or surgical debridement, together with antimicrobial administration [ 5 - 11 ]. The occurrence of MSA is more frequent in immunocompromised patients. The association between IMD, particularly MSA, and the diagnosis of multiple myeloma is exceedingly rare, with only a few cases having been reported so far [ 12 - 14 ].
Discussion N. meningitidis infection occurs rarely in immunocompetent hosts, but, in some cases, it can cause invasive diseases such as bacteremia or septic arthritis. The risk of developing invasive meningococcal disease is greater in HIV-infected patients, active or passive smokers, and people living in crowded areas [ 1 - 4 ]. The initial clinical picture raised suspicion of an infection of the right knee prosthesis. Unfortunately, the knee effusion could not be drained safely, so early microbiological cultures were not obtained and identification of the bacterial agent was delayed. Meningococcal septic arthritis can occur either by direct colonization during surgery or trauma or by secondary colonization in patients with bacteremia [ 5 - 11 ]. Considering the recent knee surgery, the most probable scenario is that the source of this infection was knee prosthesis colonization, with later bacteremia and ultimately hematogenous dissemination to the left shoulder. Microbiological cultures performed after left shoulder fluid drainage were negative, most likely because antimicrobial therapy had already been in progress for several days. During the etiological investigation of possible immunosuppressive disorders, we identified a monoclonal immunoglobulin G/lambda peak. Later, the bone biopsy confirmed the diagnosis of multiple myeloma. This finding may help explain why this patient with no known risk factors developed invasive meningococcal disease.
Conclusions Microbiological cultures performed after draining fluid from the left shoulder were negative, most likely due to the ongoing antimicrobial therapy for several days. The identification of specific bacteria, such as N. meningitidis , warrants further investigation that can lead to an early diagnosis of hidden conditions with a significant prognostic impact.
Meningococcal invasive disease is rare in immunocompetent hosts but may occur in patients with risk factors. Septic arthritis is an uncommon form of presentation and is usually due to surgical colonization or hematogenous dissemination. We present a case of a 73-year-old woman, who recently underwent knee replacement surgery, presenting with right knee and left shoulder pain, swelling, and reduced range of motion. Antibiotic therapy was promptly initiated, and the identification of invasive meningococcal disease with septic arthritis was possible through blood cultures and synovial fluid analysis.
Case presentation We present the case of a 73-year-old woman who presented to the emergency department with right knee pain, swelling, and reduced range of motion for the past four days. She had a medical history of hypertension and hypothyroidism. Additionally, she had undergone right knee replacement surgery three months prior. On physical examination, she had a fever (body temperature of 38.7 degrees Celsius), blood pressure of 117/67 mmHg, heart rate of 77 beats per minute, and an oxygen saturation of 98% on room air. The patient had an erythematous, warm right knee with edema of the ipsilateral leg. Active and passive movements of the joint were greatly diminished because of intense pain. An erythematous maculopapular rash was noted on both legs and arms. The patient also had slight pain with active and passive movements of the left shoulder. Initial blood tests revealed leucocytosis (16,970/µL) with neutrophilia (79%), an elevated C-reactive protein level of 17.58 mg/dL, and an erythrocyte sedimentation rate of 116 mm in the first hour. Blood urea nitrogen (BUN) was not elevated, and liver function tests were unaffected (Table 1 ). A right leg ultrasound showed articular effusion compatible with septic arthritis (Figure 1 ). The case was discussed with the orthopedic team, and an articular ultrasound was performed to further characterize the effusion and guide drainage of articular fluid. However, the effusion was too small to perform a safe arthrocentesis. Blood cultures and serologic tests were collected, and empirical antibiotic therapy with ceftriaxone and vancomycin was started. The patient was admitted to the Internal Medicine Department. During the initial days in the Internal Medicine Department, the macular rash and fever resolved. Nevertheless, left shoulder pain worsened, with evolving edema and limitation of motion. Meanwhile, blood cultures allowed the detection of N. meningitidis bacteremia. The extensive serological panel was negative. The identification of IMD permitted adjustment of the antimicrobial treatment. To prevent the spread of meningococcal infection, droplet precautions and antimicrobial chemoprophylaxis of close contacts were implemented. Ultrasound-guided arthrocentesis of the left shoulder effusion (Figure 2 ) was performed, revealing a leucocyte count of 115,504/µL with 92.5% polymorphonuclear leucocytes. The culture of articular fluid, however, was negative. Invasive meningococcal disease with polyarticular involvement was assumed, and antimicrobial treatment was extended because of the presence of a knee prosthesis. The presence of IMD in a previously immunocompetent host warranted further investigation of possible immunosuppressive disorders. Human immunodeficiency virus (HIV), hepatitis B virus (HBV), and hepatitis C virus (HCV) serologies were negative. However, her serum immunoglobulin G level was high at 6,470.0 mg/dL, and protein electrophoresis revealed a monoclonal G peak (6.02 g/dL). Urine immunofixation confirmed the detection of a monoclonal immunoglobulin G/lambda, with a value of 6.61 mg/dL, indicating a significant presence of M protein. The patient evolved favorably with resolution of fever, reduction of inflammatory markers, and recovery of joint mobility. Six weeks later, the patient underwent a surgical intervention to explant the right knee prosthesis and perform debridement. A bone biopsy performed during surgery confirmed multiple myeloma.
The patient underwent four weeks of antibiotic therapy in preparation for a second surgical intervention, in which a new knee prosthesis was implanted.
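As a purely illustrative aside, and not part of the case report, the laboratory values above can be put into context with two routine calculations: the absolute neutrophil count derived from the white cell count and neutrophil percentage, and a comparison of the synovial fluid leucocyte count against the commonly taught threshold of roughly 50,000 cells/µL for suspected septic arthritis. That threshold is general teaching assumed here, not a figure taken from this paper.

```python
def absolute_neutrophil_count(wbc_per_ul: float, neutrophil_fraction: float) -> float:
    """ANC = total white cell count x neutrophil fraction."""
    return wbc_per_ul * neutrophil_fraction


# Values reported in the case: WBC 16,970/uL with 79% neutrophils;
# synovial fluid 115,504 leucocytes/uL with 92.5% polymorphonuclear cells.
anc = absolute_neutrophil_count(16_970, 0.79)
print(f"Peripheral ANC ~ {anc:,.0f}/uL")

# Commonly taught rule of thumb, assumed here (not a figure from this paper).
SEPTIC_ARTHRITIS_WBC_THRESHOLD = 50_000
synovial_wbc = 115_504
print("Synovial WBC above the commonly cited septic-arthritis threshold:",
      synovial_wbc > SEPTIC_ARTHRITIS_WBC_THRESHOLD)
```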
license: CC BY
retracted: no
last_updated: 2024-01-16 23:43:48
citation: Cureus.; 15(12):e50555
package_file: oa_package/42/b1/PMC10788658.tar.gz
accession_id: PMC10788659
pmid: 38226095
Introduction Minimizing viral transmission during aerosol-generating procedures (AGPs) has become critical to healthcare worker safety during the coronavirus disease 2019 (COVID-19) pandemic. Therapeutic respiratory medication can be administered via a variety of nebulizers that generate aerosolized droplets of the medicine. Fugitive emissions of these medical aerosols, as well as bioaerosols generated in respiratory exhalations, can travel significant distances from the source and remain in the air for several minutes, depending on multiple environmental factors (temperature, humidity, airflow, etc.) [ 1 ]. These aerosols, therefore, pose the risk of secondary exposure to bystanders and healthcare workers [ 2 ]. Furthermore, the administration of nebulization therapy to patients with respiratory diseases may expose healthcare workers to fugitive aerosols that could potentially contain pathogens. There are transmission risks associated with such exposure, particularly during the COVID-19 pandemic. Tran et al. [ 3 ] cite studies related to the first severe acute respiratory syndrome coronavirus (SARS-CoV) outbreak, two of which reported nosocomial transmission potentially related to nebulization and one that showed otherwise. Transmission was attributed to prolonged exposure to infected individuals and a lack of infection control measures. A more recent review by Goldstein et al. [ 1 ] that included SARS-CoV-2 and non-coronavirus (influenza) studies suggests that the risks could not be ruled out given such inconclusive evidence. The World Health Organization (WHO) [ 4 , 5 ] and Centers for Disease Control and Prevention (CDC) [ 6 ] recommend that healthcare workers wear appropriate personal protective equipment (PPE) such as a respirator, eye protection, gown, and gloves when conducting AGPs on patients with respiratory infections [ 3 ]. In the setting of providing nebulized therapies to COVID-19-infected people, several publications have provided guidance and practical strategies over the last two years [ 7 - 12 ]. They mostly advocate for appropriate consideration for the inhaled medication delivery method, including the type of nebulizer (standard jet, vibrating mesh, and breath-actuated devices) and breathing interfaces (mouthpiece versus a facemask) [ 7 , 8 , 13 - 15 ]. In this study, we examine, through visual observation and particle count measurement, in a laboratory simulation, the emission of fugitive aerosols from a jet nebulizer and their attenuation through the use of a filtered mouthpiece. A previous study, involving visualization of airflow during nebulization of sterile water, has been reported by Hui et al. [ 16 ]. In that study, leakage of exhaled air through the side vents of a jet nebulizer connected to a human patient simulator (HPS) was visualized using tracer smoke particles that were continuously introduced into the HPS lung as part of the inhaled air. In other studies, scholars have significantly decreased particle generation during nebulizer therapy by adding a viral filter [ 17 , 18 ]. Here we provide visual observations, backed by particle count measurements, of the results, including both temporal and spatial variations of the concentrations of the fugitive aerosols for unfiltered and filtered jet nebulizer mouthpieces. 
In contrast, in the present study, fugitive emissions of aerosolized droplets of nebulized saline, representative of therapeutic medical aerosols, are directly visualized and concentration levels of the fugitive aerosols in the vicinity of and at various distances from the emission source are measured using a set of particle counters. The impact of fitting the nebulizer with a filter on the fugitive emissions is examined. The observations are exclusive of bioaerosols that the patient may have generated during exhalation.
Materials and methods A schematic illustration of the experimental setup is shown in Figure 1 . A mannequin manufactured by Only Mannequins (Model: 50013) attached to a custom bellows-driven breathing simulator was used to represent an adult patient per United States Pharmacopeia Chapter <1601> (Products for Nebulization). The bellows-based simulator was custom-built and calibrated in the College of Engineering at Florida Atlantic University. An electrically controlled motor drives a 500 mL bellows chamber to simulate breathing. The power supply to the simulator was calibrated to provide the required breathing cycle rates. A simulated breathing rate of 15 breaths per minute with a tidal volume of 500 mL and an inspiratory-to-expiratory ratio of 1:1 was used. A standard jet nebulizer (PARI LC Sprint) was attached to the mannequin using a mouthpiece interface. The nebulizer was driven by a compressor (PARI Trek S) delivering airflow at 4 L/min to the nebulizer reservoir, which was filled with normal saline [0.90% weight per volume (w/v) NaCl]. The breathing simulator did not draw air from any source other than the nebulizer mouthpiece during inhalation; this was ensured by tightly sealing the mouthpiece with tape. Saline aerosols were drawn into the mannequin via the mouthpiece interface during the inhalation phase of the breathing cycle and emitted through the expiratory valve port on the mouthpiece during the exhalation phase. The emitted aerosols were visualized in a thin sheet of green laser light (532 nm wavelength) positioned in front of the mannequin, with the plane of the sheet coincident with the sagittal plane, and the observations were captured using a high-definition video camera. The camera was placed so that its view was either normal to the light sheet or at an angle facing the laser light source. The latter placement optimized the imagery, since the droplets scatter a significant portion of the light away from the source, and facilitated visualization of the emissions at a greater distance from the mannequin. The visual observations were complemented by particle count measurements using N3 optical particle counters (OPCs) from Alphasense Ltd. One OPC was placed at Station 1, located 40 cm ahead of and 30 cm to the side of the mannequin, corresponding to where a caregiver administering the nebulization might be positioned. OPCs were also placed on tripods located at three observation stations in the sagittal plane downstream of the mannequin, as indicated in Figure 1 (Inset A). Stations 2-4 were located 40, 80, and 160 cm, respectively, from the mannequin. These positions were selected to characterize the spread of the aerosols at a distance from the source. The measurements were made simultaneously and synchronized using a single computer for data acquisition from the four sensors. Each OPC recorded particle counts in 24 diameter bins d_i (i = 1 to 24), ranging from 0.4 to 40 micrometers, over one-second time intervals using a sampling flow rate of Q_s = 4.7 × 10⁻⁶ m³/s. Three sets of tests were conducted with the jet nebulizer: one using a standard unfiltered mouthpiece with an expiratory valve port (Figure 1 ), and the other two using a mouthpiece fitted with an exhalation filter-adaptor (PARI Filter and Valve Set for LC Nebulizer; Figure 1 , inset), tested with and without the filter cap in place [ 18 ]. The study was carried out over 2021-2022. In each case, 3 mL of saline was nebulized (15-20 minutes).
Control runs of the compressor and the breath simulator were conducted in the absence of saline before and after nebulization to detect any spurious aerosols present in the system. The nebulization runs were replicated twice.
Results The results of the visualization of the fugitive aerosol emissions are shown in Figures 2 - 4 . Figures 2a - 2c show the results of the first set of tests using the standard mouthpiece with the expiratory valve port. At each exhalation, a jet of aerosolized droplets was emitted from the exhalation valve on the mouthpiece. The shape of the exhalation valve directed the jet upwards over the mouthpiece. The momentum of the jet then resulted in a three-dimensional spread in the form of a turbulent puff, carrying the fugitive aerosols ahead of and above the mannequin, initially extending 0.6 to 0.8 m from the mannequin (Figure 2b ). However, the aerosolized droplets, given their small size and high surface-area-to-mass ratio, persisted in the air for several minutes and were convected further away from the mannequin in the ensuing airflow accompanying the spread of the droplets in the room. The ambient temperature and relative humidity in the room were 22 °C and 47%, respectively, with minimal air exchange. Over the 15- to 20-minute period of operation, the fugitive aerosols emitted in the exhaled air were observed to spread into the open space ahead of and above the mannequin, extending to over 2 m in front of it. Figure 2c depicts the view looking toward the light source, which enables the best observation of the dispersion of the aerosolized droplets in the light sheet. The results of the second set of tests, using the mouthpiece fitted with the PARI filter-adaptor, are shown in Figures 3 - 4 . When the filter-adaptor was removed (Figure 3b ), the aerosolized droplets were emitted in a jet of air during each exhalation as before, but with the jet directed at a steeper upward angle by the shape of the expiratory port (Figure 3c ). When the filter-adaptor was connected to the expiratory port (Figure 4a ), fugitive aerosols were no longer observed (Figure 4b ). The filter-adaptor captured aerosols in the exhaled breath and significantly reduced the emission of fugitive aerosols during nebulization. Only minor emissions were observed over the inspiratory valve cap (Figure 4c ). The results of the aerosol particle count measurements using the OPCs are presented in Figures 5 - 7 . Figure 5 depicts the time series of the measured particle counts for a range of particle sizes at each of the four observation stations for the cases of (a) a regular jet nebulizer without a filter, (b) a jet nebulizer equipped with a PARI filter but with the filter cap removed, and (c) a jet nebulizer with the PARI filter, including the filter cap. The time series have been low-pass filtered at 1/40 Hz. As can be seen, in the absence of the filter, the fugitive aerosols spread into the open space, with the aerosol levels decaying with distance from the emission source, albeit slowly, as the plume expands and engulfs ambient air. The observations at Stations 1 and 2 are similar, highlighting the three-dimensional spread of the fugitive aerosols. Furthermore, the spread of the aerosols in the unfiltered cases (a) and (b) shows similar patterns. The decay of the particle count with time beyond 20 minutes corresponds to observations following the onset of nebulizer sputter near the end of the dose. The particle counts in the filtered jet nebulizer case (c) dramatically illustrate the effectiveness of the PARI filter in filtering out the aerosols, with the particle count measurements at the observation stations corresponding to ambient levels over the entire period of nebulization.
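The 1/40 Hz low-pass filtering of the 1 Hz count series can be reproduced with a standard zero-phase digital filter; the sketch below assumes a second-order Butterworth design, since only the cutoff frequency is stated:

```python
# Minimal sketch of the low-pass filtering step (filter design assumed).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1.0           # OPC sampling rate, Hz (one count record per second)
CUTOFF = 1.0 / 40  # stated low-pass cutoff, Hz

def lowpass(counts, order=2):
    """Zero-phase Butterworth low-pass filter of a 1 Hz count series."""
    b, a = butter(order, CUTOFF / (FS / 2.0))  # normalized cutoff = 0.05
    return filtfilt(b, a, counts)

# Example with a noisy synthetic 20-minute series of 1 Hz samples
t = np.arange(0, 1200)
raw = 50 + 10 * np.sin(2 * np.pi * t / 300) + np.random.randn(t.size) * 5
smooth = lowpass(raw)
```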
Comparisons of the associated aerosol concentrations between cases (a)-(c) at the four stations are provided in Figure 6 . The figure further illustrates the effectiveness of the filter and the features described above. The concentrations are computed from the aggregate volume of the aerosol particles, assuming them to be spherical. Between cases (a) and (b), the aerosol spread associated with (a) shows greater variability, with significantly higher peaks in particle count, presumably because the aerosol emitted from the standard mouthpiece (Figure 2 ) is directed toward the OPCs, whereas the jet from the mouthpiece without the filter cap (Figure 3c ) is directed upwards. Finally, the total particle count as a function of particle size for cases (a)-(c) is compared at each station in Figure 7 . As may be expected, the count of the smallest aerosols dominates, and the total count drops off with increasing aerosol size. Furthermore, the counts decrease with distance from the source of emission.
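As an illustration of the volume-based concentration described above, the sketch below sums spherical-particle volumes over the size bins; the bin midpoint diameters and the sampled air volume are assumed inputs rather than values taken from the study:

```python
# Minimal sketch: aerosol volume concentration from binned OPC counts,
# treating particles as spheres (bin midpoints are assumed, not the
# OPC's exact bin boundaries).
import numpy as np

def volume_concentration(counts, bin_mid_diameters_um, sampled_volume_m3):
    """counts: particles per bin; bin_mid_diameters_um: midpoint diameter
    of each bin in micrometers; sampled_volume_m3: air volume sampled.
    Returns aerosol volume per air volume (m^3 of droplets per m^3 of air)."""
    d_m = np.asarray(bin_mid_diameters_um) * 1e-6        # um -> m
    particle_volumes = (np.pi / 6.0) * d_m ** 3          # sphere volume per particle
    total_volume = np.sum(np.asarray(counts) * particle_volumes)
    return total_volume / sampled_volume_m3

# Multiplying the result by the droplet density (~1000 kg/m^3 for saline)
# would give an approximate mass concentration in kg per m^3 of air.
```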
Discussion The present study involves a specific (PARI) jet nebulizer with and without the placement of a filter-adaptor on the expiratory valve of its mouthpiece. The study was limited to the dispersion of aerosolized droplets of normal saline generated during nebulization, so that the results can be considered representative of medical aerosols. The lack of movement of the simulator might affect the distribution of the aerosols and the fit of the patient interface. Bioaerosols generated within the patient’s respiratory system were not considered. The visualization described here captures the dispersion of the fugitive emissions in the plane of the laser sheet, aligned with the sagittal plane. The corresponding particle count measurements provide temporal and spatial distributions of the aerosol concentration levels over the time period and locations considered. However, the exhaled jet of air and aerosolized droplets spread three-dimensionally in front of and above the mannequin. Visual observation and aerosol count measurement of such a spread can be facilitated by using multiple laser sheets and several particle counters, respectively. The visualization method can be used to characterize fugitive emissions of medical aerosols from other nebulizers and to evaluate the effectiveness of other types of fugitive emission mitigation devices [ 15 , 16 ]. The results of the flow visualization and particle count measurements show that aerosolized droplets leak significantly from the expiratory valve of a standard jet nebulizer mouthpiece during simulated exhalation. The droplets spread in front of and above the mannequin, remaining suspended in the air for minutes and extending to over 2 m from the mannequin over the 15- to 20-minute period of the nebulization operation. Replacing the standard jet nebulizer mouthpiece with one that includes a PARI filter-adaptor suppresses this leakage; only low levels of fugitive emissions are apparent near the inspiratory valve cap on the nebulizer. In this case, unlike with the standard mouthpiece, particle count measurements show that the associated fugitive aerosol concentration levels are not elevated above the ambient level. Thus, our findings show that a filter-adaptor on the expiratory valve of the mouthpiece of a jet nebulizer is effective in suppressing emissions of fugitive aerosols from the valve during nebulization. The addition of a filtered mouthpiece may decrease the risk of secondary exposure of bystanders and healthcare workers to respiratory pathogens. Future areas of study include the effects of room ventilation, including negative and positive airflows. In addition, the study can be repeated to characterize the spread of fugitive aerosols for other types of nebulizers. Here, we provide visual observations, backed by particle count measurements, of the fugitive emissions, including both temporal and spatial variations of their concentrations, for unfiltered and filtered jet nebulizer mouthpieces.
Conclusions Our study adds to the evidence of the risks that healthcare workers face from exposure to respiratory pathogens while administering nebulized therapies. The results visually highlight the effectiveness of using a filtered mouthpiece in suppressing the fugitive aerosols and identify an approach for limiting the occupational exposure of healthcare workers to these emissions.
Background and objective The risk of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission from patients with coronavirus disease 2019 (COVID-19) during nebulization is unclear. In this study, we aimed to address this issue. Methods Fugitive emissions of aerosolized saline during nebulization were observed using a standard jet nebulizer fitted with unfiltered and filtered mouthpieces connected via a mannequin to a breathing simulator. Fugitive emissions were visualized using a laser sheet and captured on high-definition video, and they were measured using optical particle counters positioned where a potential caregiver administering the nebulization might stand and at three other locations in the sagittal plane at various distances downstream of the mannequin. Results The use of a standard unfiltered mouthpiece resulted in significant emission of fugitive aerosols ahead of and above the mannequin, spreading to over 2 m in front of it. A mouthpiece with a filter-adaptor effectively suppressed the emissions, with only minor leakage near the inspiratory valve cap of the nebulizer. Particle count measurements supported the visual observations, providing total particle counts and aerosol concentration levels at the measurement locations; the levels decayed slowly with downstream distance. Conclusions The visualization described above captured the dispersion of emitted aerosols in the plane of the laser sheet, aligned with the sagittal plane. The particle count measurements provided temporal and spatial distributions of the aerosol concentration levels over the time period and locations considered. However, the exhaled air and aerosolized droplets spread three-dimensionally in front of and above the mannequin. The results visually highlight the effectiveness of using a filtered mouthpiece in suppressing the fugitive aerosols and identify an approach for limiting the occupational exposure of healthcare workers to these emissions while administering nebulized therapies.
Writing and editorial support were provided by Hilary Durbano, PhD, of AlphaBioCom (King of Prussia, PA, USA), a Red Nucleus company, and were funded by Theravance Biopharma US, Inc.
CC BY
no
2024-01-16 23:43:48
Cureus.; 15(12):e50611
oa_package/f5/65/PMC10788659.tar.gz
PMC10788660
38226075
Introduction Pernicious anemia is an autoimmune disease characterized by vitamin B12 deficiency due to antibodies that target intrinsic factor (IF) or parietal cells, both directly involved in vitamin B12 absorption [ 1 ]. In the United States, vitamin B12 deficiency affects about 20% of individuals over 60 years old; however, the prevalence is significantly lower in patients younger than 60 years old, with only 6% affected [ 2 ]. Patients with pernicious anemia can be asymptomatic for up to 10 to 20 years prior to presentation; when symptoms do develop, they can be neuropsychiatric and even hematological. More commonly, patients present with macrocytic anemia in addition to fatigue, shortness of breath, and pallor [ 1 , 2 ]. Without supplementation of vitamin B12, neuropsychiatric conditions develop, such as subacute combined degeneration, bilateral and symmetrical paresthesias, loss of vibratory and positional sense, and memory and mood disturbances [ 1 ]. Pancytopenia and hemolysis are rare presentations of pernicious anemia, especially in developed nations such as the United States [ 3 ]. In this case, we present a middle-aged female with severe anemia in the setting of newly discovered pernicious anemia, with laboratory findings indicating hemolysis, in whom prompt repletion of vitamin B12 resolved the hemolysis.
Discussion The most common causes of vitamin B12 deficiency include gastrointestinal surgery (gastric bypass, ileal resection) and dietary insufficiency, with autoimmune pernicious anemia being less common [ 3 ]. Less commonly still, nitrous oxide can cause acute depletion of vitamin B12, whereas the majority of etiologies are chronic in nature [ 4 ]. Vitamin B12 deficiency can result in demyelination of the dorsal and lateral columns of the spinal cord, causing subacute combined degeneration as evidenced by paresthesia, sensory deficits, ataxia, and weakness [ 3 ]. In patients with known vitamin B12 deficiency, common findings include anemia (37%), macrocytosis (54%), and hypersegmented neutrophils (32%); however, pancytopenia is a rare presentation and is only seen in 5% of patients [ 5 ]. Vitamin B12 is used in deoxyribonucleic acid (DNA) synthesis and is required for proper division of hematopoietic cells. Decreased levels of vitamin B12 lead to insufficient DNA replication in the setting of normal cytoplasmic/cell content replication, preventing effective cell division and resulting in macrocytosis [ 4 ]. This lack of maturation and arrest of development of hematopoietic cells may lead to intramedullary cell death, resulting in the release of lactate dehydrogenase and decreased haptoglobin, as seen in our patient [ 6 ]. In cases where vitamin B12 or folate deficiency is suspected, it is important to rule out folate deficiency, as treating vitamin B12 deficiency with folate alone can result in improved anemia with worsening neurologic symptoms [ 4 ]. Methylmalonic acid (MMA) levels provide diagnostic utility, such as in pregnancy, when vitamin B12 levels are falsely low in the third trimester, or in patients with myelodysplastic syndrome, in whom levels may be falsely elevated [ 1 , 4 ]. However, limitations of MMA include end-stage renal disease, which can present with elevated MMA levels that do not accurately reflect the severity of vitamin B12 deficiency [ 1 ]. As with our patient, pernicious anemia should still be considered in the absence of dietary and surgical causes in the patient’s history. With pernicious anemia, it is important to start treatment early. Although the anemia typically resolves in four to six weeks, neurologic symptoms can take several months to improve and in some cases are permanent [ 5 ]. Treatment of patients with pernicious anemia involves intramuscular (IM) injections of vitamin B12 for improved bioavailability [ 7 ]. An injection of 1000 mcg of vitamin B12 is administered daily for one week, followed by weekly injections for four total weeks, and then lifelong monthly supplementation with IM injections [ 8 ]. Alternatively, regular oral vitamin B12 supplementation (1000-2000 mcg) has been shown to be effective in the treatment of patients with various etiologies of megaloblastic anemia [ 8 , 9 ]. A case series demonstrated variability in adequate dosing (500-2000 mcg), as assessed by lowering of MMA levels and vitamin B12 testing, in patients with different etiologies of malabsorption, such as gastrectomy or pernicious anemia [ 8 ]. While oral supplementation in these patients may seem ineffective due to a lack of IF, absorption is believed to occur through IF-independent mechanisms that provide adequate uptake at high enough doses [ 10 ]. Overall, the currently accepted treatment of pernicious anemia involves IM injections, with oral supplementation adequate for patients who can be closely monitored and followed up. This case highlights an unusual, yet potentially fatal, aspect of pernicious anemia: hemolysis.
Vitamin B12 is required for the conversion of homocysteine to methionine, a reaction that regenerates tetrahydrofolate, which is important for DNA production [ 11 ]. In B12 deficiency, the accumulation of homocysteine causes oxidative stress to erythrocytes, leading to hemolysis, as seen in in vitro studies; however, the complete pathogenesis is unknown [ 12 ]. Overall, hemolysis is a rare presentation of vitamin B12 deficiency, accounting for 1.5% of presentations [ 3 ]. It is important to consider other causes of hemolysis in such patients, such as autoimmune and genetic disorders. In this case, the patient had a negative family history, a subacute onset of symptoms close to the fifth decade of life, and symptoms consistent with anemia that resolved with vitamin B12 supplementation, making other etiologies of hemolysis less likely. Vitamin B12 deficiency-related hemolytic anemia most often presents as non-immune, with a negative direct antiglobulin test (DAT). In the case above, a DAT could not be collected; however, a DAT should be obtained in the setting of hemolysis, particularly in pernicious anemia, which is an autoimmune process. There are few case reports with this presentation, such as a report from Croatia describing combined megaloblastic and autoimmune hemolytic anemia [ 13 ].
Conclusions This case presents an unusual combination of megaloblastic pernicious anemia and hemolytic anemia in the setting of severe vitamin B12 deficiency. The prompt correction and subsequent maintenance of hemoglobin with vitamin B12 administration strengthened our diagnosis. Early recognition of hemolysis in vitamin B12 deficiency can reduce healthcare costs by decreasing unnecessary workup for other etiologies of hemolytic anemia. Additionally, more research is required to delineate the type of hemolysis (immune versus non-immune), especially in the setting of autoimmune diseases like pernicious anemia. Given this, it is important to test for vitamin B12 deficiency in hemolytic anemia, as it is easily corrected.
Vitamin B12 deficiency is a well-known and common disease. While the etiology of vitamin B12 deficiency varies from post-surgical changes to inadequate dietary consumption, pernicious anemia should be considered as a common cause. Pernicious anemia is an autoimmune atrophic gastritis that impairs the absorption of vitamin B12. Manifestations include neurological changes, macrocytic anemia, glossitis, and nail changes. Hemolytic anemia is an unusual complication of vitamin B12 deficiency and an even more unusual initial presentation. This case describes a patient with previously undiagnosed pernicious anemia and severe vitamin B12 deficiency in whom hemolytic anemia was the presenting feature. Overall, this case highlights the importance of considering vitamin B12 deficiency-related hemolytic anemia and the need for further research into the causes and pathophysiology of vitamin B12-induced hemolysis, given its potential for fatal outcomes despite being easily treatable with cost-effective methods.
Case presentation A 39-year-old female with a past medical history of laparoscopic cholecystectomy in 2021 presented with a one-month history of progressive generalized weakness, periorbital tingling, and lightheadedness. The patient stated that symptoms had been constant but were exacerbated by overexertion. Her menstrual cycle was described as regular and light with no clots. She had a single 5 cm uterine fibroid, stable on ultrasound the previous month. The patient stated that she eats red meat, dairy products, and vegetables daily, denying a vegetarian or vegan diet. Social history was negative for alcohol, tobacco, and illicit drug use. The patient denied any prescribed medications or supplements aside from a daily multivitamin. Family history was negative, including for colon cancer or any gastrointestinal malignancies. Review of systems was negative overall, including symptoms of active bleeding, B-type symptoms, bowel changes, syncope, skin rash, myalgias, oral ulcers, and arthralgias. At presentation, the patient was hemodynamically stable and in no acute distress, with vital signs within normal limits except for sinus tachycardia at 110 beats per minute, a blood pressure of 115/69 mmHg, and a BMI of 27. The physical examination was significant for mild subconjunctival pallor and a capillary refill time greater than three seconds. There was no koilonychia, glossitis, gait abnormality, or focal neurological deficit, and no skin or nail findings were visualized. Initial laboratory workup was significant for severe macrocytic anemia (hemoglobin 5 g/dL, mean corpuscular volume 113 fL) with mild transaminitis (AST 129 IU/L) and hyperbilirubinemia (total bilirubin 1.3 mg/dL). A decreased haptoglobin level (<10 mg/dL) and an elevated lactate dehydrogenase level (3,450 IU/L) raised concern for hemolytic anemia (Table 1 ). The reticulocyte percentage was elevated, indicating an appropriate bone marrow response and supporting the possibility of hemolysis. A peripheral blood smear showed oval macrocytes with minimal schistocytes (Figure 1 ). Given the presentation of severe anemia with hemolysis, the anemia workup continued, and the vitamin B12 level returned very low (<150 pg/mL), with homocysteine elevated at 72.6 umol/L (normal range < 14.5 umol/L) and methylmalonic acid (MMA) elevated at 4.2 umol/mmol (normal range < 0.4 umol/mmol). Autoimmune and thyroid workups were unremarkable. Due to a lack of other etiologies from the patient’s history, pernicious anemia was suspected; therefore, intrinsic factor-blocking antibodies (IFBA) were ordered. Serum IFBA was elevated at 248 AU/mL (normal range < 1.1 AU/mL), as were anti-parietal cell antibodies, confirming the diagnosis of pernicious anemia. Gastrointestinal blood loss was ruled out via history and an adequate iron panel. Given severe hemolytic anemia from vitamin B12 deficiency, the patient was administered two units of packed red blood cells, with correction of hemoglobin to 8.5 g/dL. The patient was started on intramuscular (IM) injections of 1000 mcg of vitamin B12 daily, with a plan to continue for seven days and then transition to oral supplements. The following day, the hemoglobin remained stable and even improved to 9.8 g/dL. Given positive initial hemolysis labs without the presence of schistocytes, a direct antiglobulin test (DAT) was planned to further assess hemolysis but was not collected owing to patient refusal. With vitamin B12 repletion, the hemoglobin improved to 11.7 g/dL, and the patient reported resolution of her initial symptoms.
CC BY
no
2024-01-16 23:43:48
Cureus.; 15(12):e50534
oa_package/f2/ff/PMC10788660.tar.gz
PMC10788661
38226134
Introduction Subdural hematoma (SDH) is defined as the accumulation of blood between the dura mater and the arachnoid layer [ 1 ]. SDH is categorized into acute, subacute, and chronic based on the time of presentation, with acute presentation occurring within three days and chronic presentation occurring after 20 days [ 1 ]. Chronic SDH (CSDH) is common, especially in the elderly population, with trauma being the most frequent risk factor [ 2 ]. Hypertension, bleeding diathesis, and the use of anticoagulants and antiplatelets are considered important risk factors as well [ 3 , 4 ]. The presentation of CSDH varies; patients may be asymptomatic or may experience a variety of symptoms such as headache, disorientation, vertigo, or seizures [ 2 ]. The definitive management is surgical; however, some patients may only be observed closely or receive medical management [ 1 , 2 ]. Surgical options include twist-drill craniotomy and, preferably, burr hole evacuation, owing to its fewer complications and low chance of recurrence [ 2 ]. Other less invasive options include embolization of the middle meningeal artery (EMMA) [ 2 ]. Although EMMA is a safe procedure, it has some complications. Treatment failure is the most common difficulty, which might worsen the hematoma; other complications may include neurological complications such as stroke, blindness, facial nerve palsy, and aphasia, as well as pulmonary embolism (PE) [ 5 , 6 ]. Although acute-on-chronic SDH is uncommon, it has been reported in the literature [ 4 , 7 , 8 ]. Here, we describe a case of acute-on-chronic SDH that presented with headache and vertigo, was treated with embolization, and was complicated by stroke and PE.
Discussion Subdural hematoma is an accumulation of blood between the dura mater and the arachnoid layer, which can be acute, subacute, or chronic [ 1 ]. Acute-on-chronic subdural hematoma is an acute hemorrhage on a pre-existing hematoma. CSDH is one of the most common neurosurgical conditions and presents with a wide range of symptoms [ 2 ]. In this case, the patient’s main complaint was headache, which is the most frequently reported symptom in patients with CSDH [ 2 , 9 ]. The associated symptoms were gait instability and vertigo; further symptoms may include neurological deficit, tinnitus, seizure, or nausea and vomiting [ 2 , 10 ]. Hertha et al. reported a case of a 34-year-old female who presented with acute bilateral paraplegia and urinary retention, which were due to bilateral acute-on-chronic SDH [ 11 ]. Although falls and head trauma represent the greatest risk factors, other issues may be involved [ 2 ]. These factors include hypertension, bleeding diathesis, the use of anticoagulants, and cerebral atrophy, which is due to aging and/or chronic alcohol consumption [ 3 , 4 , 10 ]. Alcoholism is an important risk factor, not only due to its effect on cerebral atrophy but also due to its depressant effect, which increases the probability of falls and head trauma [ 10 - 12 ]. Despite the patient's denial of any head trauma, CSDH remained a possibility for him due to his age and other medical issues. Neuroimaging is crucial for the diagnosis and evaluation of CSDH; however, laboratory investigations are also important. The patient may accumulate a significant amount of blood in the subdural space before developing symptoms; therefore, a careful reading of the complete blood count (CBC) may show a change in the concentration of hemoglobin and the hematocrit. As in this case, the patient’s CBC indicated a hemoglobin decrease, and no other bleeding sources were identified. Likewise, the accumulation of blood over time can result in consumptive coagulopathy, the first sign of which is a decreased platelet count [ 13 ]. The mainstay of the management of CSDH is surgical. Symptomatic patients with a hematoma of 10 mm or more and/or a mass effect are candidates for surgical management [ 1 , 2 ]. The preferred option is burr-hole craniotomy, as it has the lowest risk and a low chance of recurrence [ 2 , 3 ]. In this case, EMMA was performed first; after that, the patient developed an ischemic stroke, and thrombectomy was subsequently performed to manage the stroke, together with burr hole craniotomy and drain insertion. Evidence suggests that using a drain helps further decrease the chance of recurrence [ 3 ]. Embolization of the MMA is a newer management option that lacks supporting evidence over surgical management; however, it is a safe option when used in conjunction with surgical management to decrease the recurrence rate [ 5 , 14 ]. Even so, it has some complications; treatment failure is the most common one, which can result in worsening of the hematoma and neurological deficits. Moreover, anatomic variation carries a risk of unintended embolization, which can result in blindness, stroke, and facial nerve palsy [ 5 , 6 ]. Gerstl et al. performed a systematic review of the complications of EMMA and found a 3.79% overall complication incidence, with 1.33% neurological complications such as stroke, aphasia, and visual changes. Furthermore, they described 0.27% cardiovascular complications, such as deep vein thrombosis and PE. Other complications include infections as well [ 15 ].
Despite the low likelihood of developing these types of complications, our patient developed both neurological and cardiovascular issues in the form of ischemic stroke, aphasia, and PE. Outcomes of CSDH are poorly documented, and patient prognosis is largely dependent on the clinical condition at presentation [ 2 ]. Earlier diagnosis and intervention yield better outcomes; however, the presence of comorbidities such as heart disease and renal failure has a great impact on prognosis [ 2 , 3 ]. In our case, diagnosis and intervention were delayed, and the embolization was complicated, all of which affected this patient’s outcome. The presented case underscores the critical importance of recognizing and promptly addressing symptoms such as headaches and vertigo, particularly in older individuals with comorbidities. Physicians should remain vigilant when managing older patients with comorbidities, as these individuals may present with subtle yet significant neurological symptoms. Moreover, the red flags of headache and vertigo in the elderly, especially when accompanied by anemia, should be taken seriously, so that delays in diagnosis and subsequent intervention can be avoided; in this case, such delays led to surgical dilemmas and complications, including ischemic stroke and pulmonary embolism. Furthermore, increasing awareness among healthcare professionals is crucial to ensuring timely intervention and minimizing the risk of complications. This case serves as a poignant reminder of the potential consequences of overlooking or delaying the diagnosis of acute-on-chronic SDH.
Conclusions To conclude, CSDH is a common neurosurgical condition, whereas acute-on-chronic subdural hematoma is not. Its presentation varies, and most patients present with headaches that typically worsen over the preceding few days. Many risk factors play a role in this condition, with head trauma and advancing age being the most important. Burr-hole craniotomy is the most effective management option because it has a low potential for complications and recurrence. On the other hand, EMMA is generally a safe procedure with a small potential for significant complications, which needs to be studied further.
Acute-on-chronic subdural hematoma (SDH) is a new hemorrhage on a preexisting hematoma in the space between the dura mater and the arachnoid layer. Although chronic SDH is common, acute-on-chronic SDH is not. Herein, we present the case of a 70-year-old male with ischemic heart disease, diabetes mellitus, and hypertension who presented with worsening headaches over the past three days, associated with gait imbalance and dizziness. The patient was vitally stable on examination, with a Glasgow Coma Scale/Score (GCS) of 15/15; his pupils were reactive bilaterally, and his neurological examination was unremarkable. Non-contrast computed tomography (CT) of the head revealed acute and chronic SDH. The patient was initially managed by embolization of the middle meningeal artery (EMMA), but one day later he developed a stroke. Hence, thrombectomy and burr hole craniotomy were performed to manage the stroke and evacuate the chronic subdural hematoma (CSDH). We present this case as an uncommon acute-on-chronic SDH that presented with headache and vertigo, was treated with embolization, and was complicated by stroke and pulmonary embolism.
Case presentation A 70-year-old male with type 2 diabetes mellitus, hypertension, and ischemic heart disease had undergone percutaneous coronary intervention (PCI) three times. He presented to the emergency room (ER) complaining of a worsening headache over the last three days. The patient reported that he had been suffering from a headache for one month, which had become more severe over the last three days. The pain was associated with dizziness and an unsteady gait. The patient denied a history of abnormal movement, a change in the level of consciousness, weakness or numbness, nausea, or vomiting. There was no history of head trauma. The patient was on metformin for diabetes and lisinopril for hypertension and had stopped taking aspirin three months earlier. The patient described an episode of vertigo one month ago on moving his head, for which he visited the ER; at that time, the physical examination was remarkable for high blood pressure only, and investigations were unremarkable. After stabilization, the patient was discharged from the ER. On the current examination, the patient looked well. He was conscious, alert, and oriented. His vitals were as follows: pulse rate, 138 beats per minute; blood pressure, 112/89 mmHg; oxygen saturation on room air, 99%. On neurological examination, the patient scored 15 out of 15 on the Glasgow Coma Scale/Score (GCS), with pupils three millimeters and reactive bilaterally; the cranial nerve examination was unremarkable; power was 5/5; and coordination was intact. Blood samples were obtained for laboratory investigations, including random blood glucose, complete blood count (CBC), liver and renal function tests, and coagulation profiles. The results were significant for a decrease in hemoglobin from 13.1 g/dL approximately three months earlier to 8.1 g/dL at presentation; otherwise, the tests were within normal limits. Other investigations included an ECG and chest x-ray, which were normal. A non-contrast head CT was performed, which indicated a left-sided acute on chronic subdural hematoma with mass effect (Figure 1 ). The patient was admitted to the ward with close monitoring of his vital signs. Moreover, he received two units of packed red blood cells, and his hemoglobin concentration increased to 9.4 g/dL. Afterward, the patient underwent cerebral angiography and EMMA and was moved to the intensive care unit (ICU). One day later, the patient developed right-sided weakness and aphasia. Head CT displayed left middle cerebral artery (MCA) ischemic changes and SDH progression (Figure 2 ). The patient was taken for cerebral angiography with thrombectomy of the M2 branch of the left MCA, burr hole evacuation of the left SDH, and subdural drain insertion. He was transferred back to the ICU for further management and monitoring. A subsequent head CT indicated a return of cerebral blood flow, adequate evacuation of the left SDH, and improvement of the midline shift (Figure 3 ). On further follow-up, the patient remained aphasic and developed right-sided hemiplegia, for which he received rehabilitation therapy. Moreover, the patient was assessed for oral feeding tolerance by the ear, nose, and throat team; examination revealed decreased laryngeal sensation and decreased reflexes; thus, he was unlikely to tolerate oral feeding. Furthermore, during the hospital stay, nine days after EMMA, the patient developed a massive pulmonary embolism (Figure 4 ), and he was transferred back to the ICU. During the ICU stay, the patient was intubated for a long duration with failure to extubate; hence, a tracheostomy was performed.
At discharge, the patient was aphasic, with right-sided hemiplegia, a gastrostomy feeding tube, and a tracheostomy. All procedures have their own risks and benefits, and the physician must make decisions based on the current evidence and their experience. This patient underwent EMMA to manage the acute-on-chronic SDH, as it has a low potential for complications; however, he developed an ischemic stroke as a complication of EMMA, leaving him hemiplegic and aphasic and worsening his outcome. Moreover, the surgery, immobility, and prolonged hospital stay all contributed to the development of a massive PE, which further worsened the patient's outcome.
CC BY
no
2024-01-16 23:43:48
Cureus.; 15(12):e50610
oa_package/0c/01/PMC10788661.tar.gz
PMC10788674
38226114
Introduction Cavernous malformations, also known as cavernomas or cavernous angiomas, are vascular lesions characterized by clusters of dilated, thin-walled blood vessels with minimal intervening brain parenchyma. While often asymptomatic, these lesions may manifest with seizures, headaches, or neurological deficits when situated in critical brain regions. Magnetic resonance imaging serves as the cornerstone for diagnosing cavernomas, revealing a characteristic "popcorn" appearance with a rim of signal loss attributed to hemosiderin [ 1 ]. However, the potential overlap in radiological features with other pathologies necessitates careful evaluation. Thrombosed aneurysms pose a diagnostic challenge due to their diverse imaging characteristics. Accurate differentiation between thrombosed aneurysms and other lesions is crucial for directing patient management and treatment strategies [ 2 ]. This case report describes a patient initially suspected to have a thrombosed aneurysm based on magnetic resonance imaging findings. However, further investigation revealed a hemorrhagic cavernoma after a normal digital subtraction angiography scan.
Discussion The presented case sheds light on the diagnostic challenges encountered in differentiating cavernous malformations from other mass lesions, such as thrombosed aneurysms. Cavernomas represent vascular anomalies characterized by clusters of dilated, thin-walled blood vessels with minimal intervening brain parenchyma. While often asymptomatic, these lesions may manifest with seizures, headaches, or neurological deficits when located in critical brain regions. The diagnostic cornerstone for cavernomas is magnetic resonance imaging, particularly utilizing gradient-echo sequences and susceptibility-weighted imaging sequences. These sequences reveal the characteristic "popcorn" appearance with a rim of signal loss due to hemosiderin deposition, providing essential insights for accurate characterization [ 1 ]. In our case, the initial suspicion of a thrombosed aneurysm based on magnetic resonance imaging findings and subsequent confirmation of a cavernous malformation through digital subtraction angiography accentuates the complexities involved in neuroimaging interpretation. The normalcy of the digital subtraction angiography results, in this case, may be attributed to the slow blood flow and low-pressure nature of the dilated, thin-walled vessels comprising the cavernous malformation. Given these hemodynamic characteristics, it is conceivable that the vessels filled too slowly to exhibit adequate contrast in the angiogram. The discrepancy between the highly suggestive magnetic resonance imaging findings and the apparently normal DSA results highlights the need to recognize the limitations of imaging modalities. The timing of contrast injection during DSA, coupled with the slow-filling dynamics of cavernous malformations, could contribute to the lack of conspicuous contrast enhancement in the angiographic study. The discussion extends to the differential diagnosis of cavernous venous malformations, which includes consideration of other cerebral vascular malformations, such as arteriovenous malformations, venous angiomas, and capillary telangiectasias. Dural arteriovenous fistulas, aneurysms, vein of Galen malformations, and hemorrhagic or calcified neoplasms are also part of the comprehensive differential considerations. Additionally, inflammatory or infectious masses, granulomas, subacute hematomas, cerebral amyloid angiopathy, hemorrhagic cerebral metastases, chronic hypertensive encephalopathy, cerebral vasculitis, and tuberculoma may present with features that necessitate careful differentiation from cavernous venous malformations [ 1 , 3 ]. Cerebral cavernomas have a spectrum of management options, primarily encompassing conservative care, microsurgical excision, and stereotactic radiosurgery. The decision-making process is intricate, taking into consideration factors such as the natural history of cavernous malformations, clinical presentation, lesion location, frequency of hemorrhagic episodes, and existing medical conditions [ 3 , 4 ]. Conservative management, involving periodic imaging and monitoring, is often chosen for asymptomatic cavernous malformations or those in low-risk locations. This approach aims to minimize potential intervention-related risks, particularly when lesions are incidentally discovered and do not cause significant neurological impairments. Microsurgical excision becomes a consideration, especially for cases linked to medically resistant epilepsy where the epileptogenic focus can be attributed to the cavernoma. 
The decision to opt for surgery is carefully weighed against potential benefits, considering the patient's overall clinical status and the impact of the cavernous malformation on neurological function. Stereotactic radiosurgery, while less common, is reserved for surgically challenging cases or lesions in critical brain regions, delivering focused radiation to induce changes in the cavernoma vasculature and reduce the risk of hemorrhage over time. The choice of management is individualized, reflecting the unique characteristics of each case and the patient's overall health, while ongoing research contributes to the dynamic evolution of optimal strategies in neurovascular medicine [ 5 ].
Conclusions In conclusion, this case underscores the diagnostic challenges in neuroradiology, exemplified by the initial misinterpretation of a thrombosed aneurysm, which was corrected upon further evaluation, highlighting the complexity of neuroimaging interpretation. The case also emphasizes the need to consider a broad range of differentials when faced with unexpected imaging results and stresses the importance of clinicians remaining vigilant for alternative diagnoses. The patient's favorable outcome with conservative management further supports the importance of tailored treatment strategies based on accurate diagnoses.
Cavernous malformations are vascular lesions characterized by dilated blood vessels with minimal intervening brain parenchyma. Although often asymptomatic, they can present with seizures, headaches, or neurological deficits. Accurate diagnosis relies on magnetic resonance imaging, with characteristic features such as a "popcorn" appearance. We present a case of a 45-year-old male with chronic headaches and seizures who underwent an extensive work-up. Initial magnetic resonance imaging suggested a thrombosed aneurysm, with subsequent cerebral angiography being unremarkable, supporting the final diagnosis of a cavernous malformation. Conservative management, initiated for asymptomatic lesions, led to effective seizure control and improved quality of life. This case underscores diagnostic complexities in neuroradiology, emphasizing the need for careful consideration of differentials when faced with unexpected imaging results. Clinicians must remain vigilant for alternative explanations, recognizing the dynamic nature of optimal strategies in neurovascular medicine.
Case presentation A 45-year-old male sought medical attention at the neurology clinic with a six-month history of chronic headaches characterized by throbbing pain in the left temporal region, occasionally accompanied by nausea. Despite an initial diagnosis of chronic migraines, the patient returned two months later after experiencing a witnessed generalized tonic-clonic seizure. Neurological examination revealed mild left-sided facial weakness (House-Brackmann Grade II) and a subtle reduction in sensation along the left side of the face. Additionally, there was evidence of a subtle left-sided pronator drift and mild dysmetria on finger-to-nose testing. Following this development, an extensive work-up was initiated. Laboratory investigations, including complete blood count, electrolytes, liver function tests, and coagulation profile, were unremarkable. An electroencephalogram revealed abnormal focal epileptiform discharges in the left temporal region. Urgent magnetic resonance imaging of the brain was ordered, revealing a lesion close to the M1 segment of the left middle cerebral artery. The initial interpretation of the magnetic resonance imaging findings raised concern for a thrombosed aneurysm, as the lesion demonstrated high signal intensity on both T1-weighted and T2-weighted images. Notably, a gradient echo image revealed a blooming artifact, suggestive of hemorrhage within the lesion. Furthermore, a mass effect on the left cerebral peduncle was observed, emphasizing the potential impact of the lesion on adjacent brain structures (Figure 1 ). To further characterize the vascular nature of the lesion and confirm or exclude the possibility of a thrombosed aneurysm, digital subtraction cerebral angiography was performed. Surprisingly, the digital subtraction angiography results were unremarkable, revealing no evidence of a vascular abnormality (Figure 2 ). This unexpected outcome prompted a revised interpretation of the magnetic resonance imaging findings, attributing them to a hemorrhagic cavernoma rather than a thrombosed aneurysm. Given the absence of significant neurological deficits, a conservative management approach was chosen. The patient was initiated on antiepileptic medication for seizure control, and close monitoring was instituted. The hospital course remained uneventful, with no further seizures reported. Follow-up visits at regular intervals demonstrated effective control of headaches and seizures with the prescribed medication, leading to a significant improvement in the patient's quality of life.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50548
oa_package/c4/65/PMC10788674.tar.gz
PMC10788675
38226106
Introduction Herpes zoster ophthalmicus (HZO) showcases diverse manifestations, among which herpes zoster ophthalmicus-related ophthalmoplegia (HZORO) is a notable occurrence, affecting either the ipsilateral or contralateral eye. While studies have noted some degree of ophthalmoplegia in up to 31% of HZO cases, diplopia is relatively uncommon. HZORO primarily affects the third cranial nerve [ 1 ]. Despite the variability in presentation, ophthalmoplegia in otherwise healthy HZO cases often carries a favorable prognosis, with notable improvement typically observed over a few months. However, the effectiveness of antiviral treatment, with or without steroids, remains uncertain, as earlier randomized clinical trials are lacking [ 2 - 5 ]. In this report, we detail a case of HZORO and outline the management and course of the disease.
Discussion In HZORO, while approximately 90% of cases manifest with rashes, less than 10% exhibit diplopia [ 3 ]. The sites affected that cause motility impairment encompass muscles and neurons at multiple levels, including the orbital apex, cavernous sinus, and central nervous system [ 1 ]. These impairments result from various mechanisms, such as viral cytopathic effects, immune responses, and vasculitis [ 1 , 3 ]. Our case presented with ipsilateral pupil-sparing oculomotor palsy, which occurred despite the initiation of antiviral treatment. Notably, prior reports demonstrated comparable recovery rates between oral and intravenous antiviral therapies, irrespective of whether the treatment duration exceeded 10 days or was 10 days or less [ 3 ]. The role of corticosteroids in treating HZORO remains controversial. A recent systematic review highlighted a more favorable prognosis for women, immunocompetent individuals, and those administered corticosteroids [ 3 ]. Another recently published review article showed that the complete recovery rate among immunocompetent patients with HZORO remained consistent between individuals treated solely with antivirals and those receiving a combination of antivirals and oral steroids. Nonetheless, it was noted that age might play a role in the recovery of ophthalmoplegia [ 2 ]. Additionally, a recent meta-analysis indicated that prolonged steroid therapy yielded positive effects, offering a potential avenue for improving recovery from ophthalmoplegia associated with HZO, while age, gender, and initial steroid dosage did not notably impact recovery status [ 4 ]. These results emphasize the potential advantages of investigating extended steroid tapering as a feasible strategy in managing HZORO, urging the need for further exploration. A prior study showed that fewer than 40% of patients experienced complete recovery, leaving over 60% with persistent ophthalmoplegia; typically, substantial improvement was noted within two months [ 5 ]. Our patient demonstrated a swift and full recovery within a month of commencing a 10-day regimen involving both antiviral and corticosteroid treatments.
Conclusions The management of HZORO remains a challenge, particularly regarding the use of systemic steroids. This case contributes to the scarce evidence suggesting that corticosteroids might significantly improve recovery in HZORO cases. The highlighted instance of third nerve HZORO demonstrated marked improvement within days and complete resolution of the palsy within a month with a short course of oral valacyclovir and steroids. This emphasizes the critical need for randomized controlled trials to conclusively ascertain the efficacy of systemic steroids in easing or abbreviating the course of HZORO. Furthermore, determining the optimal dosage and duration of corticosteroid treatment is imperative for refining HZORO management strategies.
The use of systemic steroids in managing herpes zoster ophthalmicus-related ophthalmoplegia (HZORO) remains a topic of debate. Here, a case involving third nerve HZORO is highlighted, where a regimen of oral valacyclovir followed by a brief course of oral steroids resulted in significant improvement within days and complete resolution of the palsy within a month of initiating the treatment. This case underscores the need for randomized controlled studies to definitively determine the efficacy of systemic steroids in alleviating or shortening the course of HZORO.
Case presentation An 83-year-old man came to the emergency department, complaining of a painful and erythematous skin lesion on his forehead that had persisted for a week. Initially diagnosed with erysipelas, he began a regimen of clindamycin 150 mg twice a day. One day later, his pain intensified, and vesicular lesions appeared on his forehead and around his left eye. Consequently, he was referred to an ophthalmologist, who diagnosed him with HZO. Treatment commenced with oral valacyclovir 1 gram three times daily. Ocular examination revealed no abnormalities except for a myopic fundus and glaucomatous disc, with no signs of vitreous or retinal vasculitis (Figure 1 ). Hutchinson's sign was not observed. The patient was taking bimatoprost 0.03%/timolol 0.5% ophthalmic solution (Ganfort, Allergan, Dublin, Ireland). A day later, he returned to the clinic, reporting diplopia. Examination indicated restricted adduction, elevation, and depression, coupled with ptosis of the left eyelid (Figure 2 ). Corneal sensation was normal. The pupil responded symmetrically and reactively, and no relative afferent pupillary defect (RAPD) was detected. Orbital magnetic resonance imaging (MRI) results were normal, ruling out myositis and orbital apex syndrome. Brain magnetic resonance angiography (MRA) and venography (MRV) were performed to rule out vascular involvement or cavernous sinus issues, revealing only white substance degenerative changes indicating old right parietal lobe infarcts. Additionally, 70 mg/day of oral prednisolone (1 mg per kg) was initiated for three days and tapered over seven days. Within three days of starting the steroid, the patient displayed reduced limitations in eye movements, and subjective diplopia in the primary position decreased. After one month without any further therapy, the palsy completely resolved (Figure 3 ).
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50553
oa_package/dc/67/PMC10788675.tar.gz
PMC10788676
38226132
Introduction Astrocytomas are tumors of the central nervous system (CNS) that arise from astrocytes, star-shaped glial cells. There are four grades of astrocytoma: World Health Organization (WHO) Grade 1 (pilocytic astrocytoma), Grade 2 (diffuse astrocytoma), Grade 3 (anaplastic astrocytoma), and Grade 4 (glioblastoma) [ 1 ]. WHO Grade 1 astrocytomas most frequently affect children and teens and are more common in males, who account for 63% of cases worldwide [ 1 ]. The common locations for Grade 1 astrocytoma include the cerebellum, spinal cord, and optic pathway [ 2 ]. The present case is of an adult female patient who presented to us with complaints of headache and several episodes of vomiting and was found to have an ill-defined, thick-walled lesion in the right parietal and temporal region with a mass effect on MRI, which was histopathologically proven to be a WHO Grade 1 astrocytoma.
Discussion Grade 1 astrocytomas are the primary brain tumors most frequently diagnosed in children and adolescents [ 1 ]. The risk factors directly leading to the development of pilocytic astrocytomas are still unclear, and most cases are idiopathic [ 3 ]. Patients may present with symptoms of elevated intracranial pressure (headache, nausea, and vomiting), as in our case. Ataxia and cranial nerve involvement are also frequently associated with the disease [ 3 ]. Other common locations for astrocytomas include the hypothalamus, optic pathways, and brainstem. When located in the hypothalamus, these tumors may cause endocrine disturbances and can lead to diabetes insipidus, precocious puberty, or electrolyte imbalance. When present in the optic pathways, the tumor may cause loss of visual acuity or visual field abnormalities [ 4 ]. Optic nerve tumors are reported to be linked to radiotherapy, while brainstem tumors are related to chemotherapy [ 5 ]. The classification of these malignancies has been advanced by new diagnostic criteria based on histology and molecular characterization, with isocitrate dehydrogenase (IDH) mutation status and 1p/19q codeletion status adding new insights into prognosis and therapeutic response [ 6 ]. In line with the WHO classification, astrocytomas, oligodendrogliomas, mixed oligoastrocytomas, and ependymal tumors are the four primary categories of gliomas. These are further subdivided into WHO Grades I through IV based on histological and cytological features, including cellularity, mitotic activity, atypical nuclei, microvascular proliferation, and necrosis [ 7 ]. One should be aware of the association of astrocytomas with several syndromes, such as Cowden, Turcot, Lynch, Li-Fraumeni, and neurofibromatosis type I [ 3 ]. The treatment of choice is surgical excision, and cutting-edge microsurgical technology is now used concurrently [ 8 ]. With survival approaching 7 to 8 years following surgery, resection has proven advantageous for low-grade tumors [ 9 ].
Conclusions In summary, this case report presents a truly unusual occurrence, a Grade 1 astrocytoma in an adult female. Such cases are exceedingly rare, challenging our conventional understanding of the demographics of astrocytoma incidence. This unique presentation underlines the importance of maintaining a broad diagnostic perspective and adaptability in the face of atypical clinical profiles. The successful management of this Grade 1 astrocytoma in an adult female reinforces the idea that individualized treatment plans are essential. While considered routine for such tumors, surgical intervention highlights the potential for positive outcomes. The exceptional nature of this case prompts the need for further investigation into the underlying factors contributing to these occurrences in this specific demographic. Ongoing research is vital in understanding the epidemiology and behavior of these tumors. In conclusion, this case report serves as a testament to astrocytomas' remarkable and uncommon aspects. It reminds us that even well-established medical knowledge can be challenged and that ongoing research is crucial to adapt to the unexpected variations in neuro-oncology.
Astrocytomas are rare in adults and are less common in the parietal and temporal regions of the brain parenchyma. The current case is of a 26-year-old female patient who presented with a four-month history of headaches and a two-month history of vomiting. The patient's brain MRI showed an ill-defined, thick-walled lesion in the right parietal and temporal region with mass effect, which histopathology confirmed to be a WHO Grade 1 astrocytoma. This manuscript describes the imaging and histopathological appearance of a WHO Grade 1 astrocytoma in an adult female.
Case presentation The current case is of a 26-year-old female who came to the emergency department with complaints of headaches for four months and vomiting for two months. The headache was diffuse, predominantly in the right temporal region, throbbing, and severe in intensity. The patient had to stop all her work when the pain started. There were no aggravating factors, and she had to take painkillers to relieve the headache. Initially, the headache was infrequent, occurring about once a week, but it increased to almost once every two days. The vomiting was projectile in nature and was always associated with the headache. The patient was referred to the neurology outpatient department for further management. She had no history of loss of consciousness, visual disturbance, ear, nose, or throat bleeding, or head trauma. On examination, she had normal power in both the upper and lower extremities. She was advised to undergo blood tests and an MRI of the brain with contrast to rule out any organic cause for the above-mentioned complaints. There was an increase in the white blood cell (WBC) count and urea; the rest of the blood tests were unremarkable (Table 1 ). MRI of the brain with contrast revealed an ill-defined, thick-walled, intra-axial, minimally enhancing altered signal intensity lesion with perilesional edema in the subcortical and deep white matter. The lesion showed peripheral restriction on diffusion-weighted imaging (DWI) with corresponding low signal intensity on the apparent diffusion coefficient (ADC) map, appeared heterogeneously hyperintense on T2-weighted imaging (T2WI) with T2WI/fluid-attenuated inversion recovery (FLAIR) mismatch, and was hypointense with small cystic areas on T1-weighted imaging (T1WI), with a few areas of blooming on susceptibility-weighted imaging (SWI) (Figure 1 ). On magnetic resonance (MR) spectroscopy, there was an increased choline peak, a decreased N-acetyl aspartate (NAA) peak, a decreased choline : NAA ratio, and a lack of a lactate peak at 1.3 ppm (Figure 2 ). The patient underwent right parieto-temporal craniotomy and excision of the lesion. The histopathology report confirmed pilocytic astrocytoma (WHO Grade 1) (Figure 3 ). The post-operative CT was satisfactory, and the patient was managed conservatively thereafter. She was conscious, oriented, and vitally stable and was discharged three days after the operation, with a follow-up plan after 10 days or earlier in case of emergency.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50554
oa_package/85/4e/PMC10788676.tar.gz
PMC10788679
38221906
INTRODUCTION Genotype imputation is a cost-efficient technique to expand the number of markers in genetic studies. Li and Stephens' hidden Markov model (HMM) and the pre-phasing-imputation pipeline are employed by the popular tools Minimac, IMPUTE and BEAGLE, with slightly different implementations but similar accuracy [ 1–5 ]. The imputation accuracy (e.g. the squared correlation between imputed dosage and true genotype; dosage r 2 ) [ 6 ], the scores generated by software without the true genotype (Rsq and INFO) [ 3 , 7 , 8 ] and the number of variants passing a predetermined Rsq threshold (high-Rsq variants) are widely used quality metrics to evaluate imputation performance [ 9 ]. This performance primarily depends on the reference panel size and genetic similarity between the reference panel and target sample [ 10 ]. In studies using the International HapMap Project (HapMap) and the 1000 Genomes Project (1KGP) [ 11 , 12 ], pooling multiple ancestries together to maximize the panel size typically improves the imputation performance compared to using only the matched ancestry [ 8 , 13 , 14 ]. Deelen et al. combined the population-specific whole genome sequencing (WGS) dataset of the Genome of the Netherlands (GoNL) with the 1KGP and showed a higher dosage r 2 than when using either GoNL or 1KGP alone [ 15 ]. Huang et al. combined the UK10K with the 1KGP to generate more variants with high INFO [ 9 ]. Moreover, larger multi-ancestry reference panels, such as the Haplotype Reference Consortium (HRC) and the Trans-Omics for Precision Medicine (TOPMed) [ 16 , 17 ], greatly improved the imputation accuracy and increased the number of high-Rsq variants in European (EUR), African (AFR) and Admixed American (AMR) populations [ 10 , 18 ]. However, constructing large reference panels by combining diverse ancestries is not always beneficial. Small-size population-specific panels, such as Norwegian and Estonian panels [ 19 , 20 ], have achieved similar performance to the HRC panel in EUR. Moreover, although the HRC panel includes all 1KGP samples, the imputation accuracy in non-EUR populations can be inferior to that of the 1KGP panel alone [ 10 , 21 ]. Bai et al. found that adding 27 samples from a different ancestry to the Han Chinese (1KGP-CHB, size = 103) resulted in better imputation accuracy than using the 1KGP-CHB or 1KGP panel (size = 2504) [ 22 ]. In studies of Asian populations, combining population-specific WGS datasets with the 1KGP improved the imputation accuracy in some instances [ 23–25 ], but not in others [ 26–28 ]. So far, the reason for these discrepancies is unclear. Two sets of parameters are used in the HMM-based imputation algorithms. Transition probability is governed by the template switching rate ( θ ) between adjacent markers, which models recombination and relatedness; emission probability is determined by the error rate ( ε ) for each marker, which models genotyping error, gene conversion and recurrent mutation [ 3 , 7 ]. These parameters are crucial to the HMM for calculating matching probabilities between the reference and target haplotypes during the imputation process, but dosage r 2 is robust to these parameters [ 3 , 5 ]. Many studies have suggested different Rsq or INFO thresholds to achieve a similar dosage r 2 [ 29 , 30 ], making it difficult to ascertain the relationship between the imputation process, Rsq and dosage r 2 .
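To make the role of these two parameter sets concrete, the following minimal sketch (a schematic illustration only, not the Minimac implementation; the function and variable names are ours) shows how a Li and Stephens-style forward step combines the template switching rate θ in the transition step with the error rate ε in the emission step:

```python
import numpy as np

def forward_step(prev_probs, ref_alleles, observed_allele, theta, eps):
    """One schematic forward step of a Li and Stephens-style HMM.

    prev_probs      : state probabilities over the K reference haplotypes at the previous marker
    ref_alleles     : alleles (0/1) carried by the K reference haplotypes at the current marker
    observed_allele : allele (0/1) observed on the target haplotype at the current marker
    theta           : template switching rate between the two adjacent markers
    eps             : per-marker error rate (genotyping error, gene conversion, recurrent mutation)
    """
    prev_probs = np.asarray(prev_probs, dtype=float)
    ref_alleles = np.asarray(ref_alleles)
    K = len(ref_alleles)
    # Transition: stay on the same template with probability (1 - theta),
    # or switch to a template chosen uniformly at random with probability theta.
    trans = (1.0 - theta) * prev_probs + theta * prev_probs.sum() / K
    # Emission: templates matching the observed allele emit it with probability (1 - eps),
    # mismatching templates emit it with probability eps.
    emit = np.where(ref_alleles == observed_allele, 1.0 - eps, eps)
    probs = trans * emit
    return probs / probs.sum()  # normalize to keep the recursion numerically stable

# Toy usage: four reference haplotypes, uniform prior over templates.
state_probs = forward_step(np.full(4, 0.25), [0, 1, 1, 0], observed_allele=1, theta=0.01, eps=0.001)
```

A smaller θ makes the model reluctant to switch templates, so a single reference haplotype can dominate the matching probabilities; this behaviour is examined throughout the study.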
In this study, we imputed East Asian (EAS) samples using the TOPMed reference panel (EAS comprised 1.22% of the total samples) and found that Rsq overestimated dosage r 2 , particularly in marginal-quality bins. We introduced a novel variance component analysis for Rsq, analytically investigated why Rsq was overestimated and characterized the relationship between the template switching rate used in the HMM, the quality metrics and the imputed dosage. Furthermore, we evaluated the θ value, Rsq, dosage r 2 , the deviation between them and the number of high-Rsq variants from matched or distant ancestry in cases where the target ancestry was the major or minor component of the multi-ancestry panels.
MATERIAL AND METHODS Subjects, genotyping and quality control Cohort specifications of BioBank Japan (BBJ-180k), genotyping and quality control (QC) are provided in Supplementary Method 1 . For imputation using the TOPMed panel, we lifted the coordinates of the genotyping array from hg19 to hg38 using LiftOver [ 31 ]. Palindromic variants (reference/alternative (ref/alt) alleles were G/C or A/T) and variants not in the TOPMed freeze5b ( https://bravo.sph.umich.edu/freeze5/hg38/ ) were excluded. The ref/alt alleles were swapped and/or reverse-complemented according to the hg38 reference sequence. A total of 515 587 autosomal variants remained. Pre-phasing, reference panel construction and imputation EAGLE v2.4.1 was used for pre-phasing without an external reference [ 32 ]. The 1KGP (p3v5) reference panel was downloaded from https://genome.sph.umich.edu/wiki/Minimac3 . Methods used to construct the population-specific panels (named the BBJ1k and JEWEL3k) are provided in Supplementary Method 2 . We used an in-house server to perform imputation using the 1KGP, BBJ1k and JEWEL3k panels. Minimac3 v2.0.3 was used to estimate the HMM parameters and prepare m3vcf files [ 3 ]. Minimac4 v1.0.2 was used for imputation [ 33 ]. Imputation using the TOPMed and HRC panels was performed using the TOPMed and Michigan imputation servers [ 3 , 17 ], respectively. Evaluation of the imputation performance using different reference panels Rsq from the Minimac4 info file was extracted for the subsequent analyses. Variants with Rsq <0.3 were removed [ 34 ]. Imputation performance was empirically evaluated using 993 samples with WGS (named WGS 993 ). QC and processing are described in Supplementary Method 3 . Coverage of a reference panel was the fraction of variants in WGS 993 that could be imputed with Rsq ≥ 0.3. Dosage r 2 was the squared Pearson correlation coefficient between the imputed dosage and true genotype (encoded as 0, 1 or 2). We grouped the variants into dosage r 2 bins with a 0.05 interval and bootstrapped each bin 1000 times to obtain the 2.5% and 97.5% quantiles (95% confidence interval; CI) of Rsq − dosage r 2 . Minor allele frequency (MAF) and minor allele count (MAC) of WGS 993 were used to stratify variants into bins. Quantification of the deviation between Rsq and dosage r 2 We modeled the relationship between the imputed allelic dosage ( y ) and the true allele ( x ) using the simple linear regression formula y = β imp × x + β 0 + ε, where β imp is a scalar regression coefficient, β 0 is the intercept and ε is an error term. The Minimac EmpRsq metric, which is the squared Pearson correlation coefficient between x and y ( https://genome.sph.umich.edu/wiki/Minimac3_Info_File ), equals the ratio between the regression sum of squares ( SS reg ) and the total sum of squares ( SS tot ) in this simple linear regression. Consequently, we have EmpRsq = SS reg / SS tot = Cov( x , y )^2 / (Var( x ) × Var( y )), where Cov( x , y ), Var( x ) and Var( y ) are the covariance between x and y , the variance of x and the variance of y , respectively. Rsq is the ratio between Var( y ) and p (1 − p ), where p is the alternative allele frequency (AAF) in the imputed dataset [ 3 , 7 ]. Then, Rsq = Var( y ) / ( p (1 − p )) = ( SS reg + SS res ) / ( n × p (1 − p )), where SS res is the residual sum of squares, which satisfies SS tot = SS reg + SS res , and n is the number of imputed haplotypes. Hence, Rsq comprises two parts: regression related and residual related. We define MARE = SS res / ( n × p (1 − p )) as MAF-Adjusted-Residual-Error (MARE). Then Rsq = SS reg / ( n × p (1 − p )) + MARE. By assuming an equal AAF in x and y , i.e. Var( x ) ≈ p (1 − p )
, Rsq could be further treated as Rsq ≈ β imp ^2 + MARE (Equation 6 ). Finally, MARE and β imp could be obtained from Rsq and EmpRsq as β imp = sqrt(Rsq × EmpRsq) and MARE = Rsq × (1 − EmpRsq) (Equation 7 ). In Equation ( 7 ), the positive root is taken; β imp would be negative if x and y were negatively correlated, but we did not consider that situation. Hence, each combination of Rsq and EmpRsq indicates specific values of MARE and β imp . Hereafter, we refer to sqrt(Rsq × EmpRsq) as the β imp metric and MARE as the MARE metric. Because the WGS dataset comprises unphased diploid data, dosage r 2 calculated from it will be slightly different from EmpRsq. Supplementary Note 1 provides the detailed methods to calculate MARE and β imp from haploid or diploid data, and the concordance between values obtained using Equations ( 6 )–( 7 ) and those calculated from the imputed dosage in both haploid and diploid cases. Imputation using the simulated 1KGP reference panels with different θ values The 1KGP (p3v5) vcf files were downloaded from the Minimac3 website (see above). Parameters were estimated using Minimac3. The θ value was extracted (denoted as ‘Recom’ in the m3vcf file) [ 3 ], manually scaled by 21 different factors (0.01- to 100-fold) and replaced in the original file. A brief explanation of the θ value is provided in Supplementary Note 2 . The array data of WGS 993 , a subset of BBJ-180k, was used as the target sample. The imputation procedure was the same as described above. Evaluation of the imputation performance using the simulated reference panels We obtained the imputed allelic dosage (LooDosage) from the Minimac4 empiricalDose file (by turning the ‘--meta’ option on) [ 35 ]. It was derived from the leave-one-out method by hiding markers on the array during the imputation. Rsq, EmpRsq, MARE and β imp were calculated from the LooDosage and array data. We bootstrapped the metric values 1000 times to obtain the 95% CI. MAF was obtained from the array data. Simulation of reference panels and estimation of the θ value We sampled subsets from the 1KGP and JEWEL3k, shuffled the sample order 10 times and created new vcf files using bcftools v1.14 ( https://samtools.github.io/bcftools/ ) [ 36 ]. We estimated the parameters using Minimac3. Supplementary Method 4 provides the detailed methods. The total θ value along chr19 (obtained by summing the θ values between adjacent markers) was used to evaluate the impact of the reference panel size and ancestral diversity. A discussion of the θ value qualification is provided in Supplementary Note 3 . EUR-EAS reference panel simulation and imputation We randomly sampled 403 individuals (named 1KGP-EUR 403 ) from the 1KGP-EUR and combined them with the 1KGP-EAS and 6 subsets (size = 500, 1000, 1500, 2000, 2500 and 3256) of the 3256 JPT WGS samples (named JPT 3256 ) in JEWEL3k. The remaining 100 individuals in the 1KGP-EUR were used as the target sample. We extracted the 10 375 polymorphic variants (chr19) on the Illumina Global Screening Array v3.0 ( https://support.illumina.com/content/dam/illumina-support/documents/downloads/productfiles/global-screening-array-24/v3-0/infinium-global-screening-array-24-v3-0-a1-b151-rsids.zip ) to simulate the genotyping array. Parameter estimation and imputation were the same as above. JPT-1KGP reference panel simulation and imputation We sampled seven subsets (size = 100, 500, 1000, 1500, 2000, 2500 and 3256) from the JPT 3256 and combined the JPT 3256 with six subsets (1KGP-JPT and one to five ancestries) of the 1KGP. WGS 993 was used as the target. The other processing methods were the same as above.
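As a minimal numeric illustration of the relationships given in the Quantification subsection above (a toy sketch, not the authors' impumetric script; it assumes haploid dosages for a single variant and 0 < AAF < 1):

```python
import numpy as np

def imputation_metrics(imputed_dosage, true_allele):
    """Toy per-variant calculation of Rsq, EmpRsq, MARE and the beta_imp metric
    from haploid imputed dosages (y) and true alleles (x), following Equations (6)-(7)."""
    y = np.asarray(imputed_dosage, dtype=float)  # haploid imputed dosage in [0, 1]
    x = np.asarray(true_allele, dtype=float)     # true allele coded 0/1

    p = y.mean()                                 # alternative allele frequency in the imputed data
    rsq = y.var() / (p * (1.0 - p))              # observed variance over expected variance
    emp_rsq = np.corrcoef(x, y)[0, 1] ** 2       # squared Pearson correlation with the truth

    beta_imp = np.sqrt(rsq * emp_rsq)            # regression-related component
    mare = rsq * (1.0 - emp_rsq)                 # MAF-adjusted residual error
    return rsq, emp_rsq, mare, beta_imp
```

By construction, rsq equals beta_imp**2 + mare, so an imputation result whose dosages are overconfident (close to 0 or 1 but often wrong) inflates both MARE and Rsq without improving EmpRsq.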
Number of confident alleles and high-Rsq variants Haploid dosage (HDS) is the imputed alternative allele’s allelic dosage at the haploid level, which could be obtained using the ‘--format HDS’ option in Minimac4. Briefly, HDS is obtained in a per-variant and per-individual manner, and a higher value indicates that an imputed alternative allele is more certain. We quantified the θ value’s impact on the imputed dosages by the number of confident alleles (HDS > 0.9). Rsq was used to judge how many high-Rsq (Rsq > 0.7) variants could be passed to the downstream analyses. Furthermore, we determined how many confident alleles and high-Rsq variants could be obtained only from the additional distant ancestries in the multi-ancestry reference panel. We defined ancestry-specific variants as follows: EUR-only variants existed only in the 1KGP-EUR 403 , whereas non-EUR variants were absent from the 1KGP-EUR 403 . JPT 3256 -only and 1KGP-EAS-only variants existed only in the JPT 3256 and 1KGP-EAS, respectively. Non-EAS variants were not found in the JPT 3256 or 1KGP-EAS. Replication of the TOPMed imputation pipeline We followed the TOPMed imputation pipeline (accessed on 15 October 2022, https://topmedimpute.readthedocs.io/ ) and used the 1KGP and JEWEL3k reference panels. Specifically, the HapMap2 genetic map was used as a reference for the θ value instead of estimating it using Minimac3. WGS 993 was used as the target. As the θ value was not output by default, we modified the Minimac4 source code to obtain the transformed θ value ( Supplementary Method 5 ).
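A trivial sketch of the two counting steps just described (illustrative only; array names are ours), assuming a variants-by-haplotypes HDS matrix and a per-variant Rsq vector:

```python
import numpy as np

def count_confident_and_high_rsq(hds, rsq, hds_cut=0.9, rsq_cut=0.7):
    """Count confident imputed alternative alleles (HDS > 0.9) and
    high-Rsq variants (Rsq > 0.7), mirroring the thresholds used above."""
    n_confident_alleles = int((np.asarray(hds, dtype=float) > hds_cut).sum())
    n_high_rsq_variants = int((np.asarray(rsq, dtype=float) > rsq_cut).sum())
    return n_confident_alleles, n_high_rsq_variants
```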
RESULTS B‌BJ imputation using the four reference panels The BBJ-180k was imputed using the TOPMed, 1KGP, BBJ1k and JEWEL3k reference panels ( Figure 1A ). Characteristics of each panel and the target sample are listed in Table 1 . We categorized the imputed variants by MAF and Rsq in each imputed dataset. With more EAS samples, more low-frequency variants (MAF < 5%) passed each Rsq threshold ( Figure 2A and Supplementary Table 1 ). In addition to the absolute number, unique variants were imputed from each panel ( Figure 2B ). These results reproduced the benefits of using large and different reference panels [ 19 ]. We then used WGS 993 to empirically evaluate the imputation performance. MAF and MAC of WGS 993 were used to categorize the variants into six bins: common (MAF ≥ 5%), low-frequency (0.5% ≤ MAF < 5%), rare (0.1% < MAF < 0.5%), doubleton (MAC = 2) and singleton (MAC = 1). For common single nucleotide variants (SNVs), the TOPMed imputation result showed the highest coverage (0.950), followed by 1KGP (0.948), JEWEL3k (0.907) and BBJ1k (0.890). This loss of variants in JEWEL3k and BBJ1k were possibly due to the additional QC steps used while combining the 1KGP with Japanese WGS ( Supplementary Method 2 ). As MAF decreased, the coverage also decreased; however, more EAS samples in the reference panel mitigated this decrease, as expected ( Figure 2C ). The 155 297 shared SNVs (on chr19) imputed by the four panels were used to compare imputation accuracy. The mean dosage r 2 using the TOPMed panel was between those of 1KGP and BBJ1k ( Figure 2D ). However, Rsq showed 1.83–26.2 times higher upward biases when dosage r 2 was <0.85 in the TOPMed imputation result ( Figure 2E and Supplementary Table 2 ). Using all SNVs and short insertions and deletions (indels) with Rsq ≥ 0.3, although with higher coverage, the TOPMed imputation result showed an inferior dosage r 2 than 1KGP ( Supplementary Figure 1 ). Quantifying the deviation between Rsq and dosage r 2 The deviation between Rsq and dosage r 2 was persistent in all MAF bins only when using the TOPMed panel ( Supplementary Figure 2 ), which indicated a potential systematic bias from the imputation pipeline or the reference panel. To validate this observation, we analytically derived the relationship between Rsq and dosage r 2 (Methods). Two novel metrics, MARE and β imp , were introduced to quantify the deviation. MARE, an MAF-adjusted form of residual error, takes a value between 0 and 1, and increases with SS res . β imp describes the distinguishability between the mean imputed dosage of each true genotype group. Rsq is the ratio between observed and expected variance and dosage r 2 shows the correlation. Under the assumption of ‘well-calibration’ (the posterior allele probability from imputation equals the expected true allele dose), Rsq equals dosage r 2 [ 10 ]. Our analytical derivation of the relationship between Rsq and dosage r 2 did not assume ‘well-calibration’ ( Supplementary Figures 3 – 7 and Supplementary Note 1 ). The relationship between the imputed dosage and the true genotype determined the four metrics. Any two of Rsq, dosage r 2 , MARE and β imp could entirely quantify this relationship and determine the other two metrics (Equations ( 6 )–( 7 )), whereas Rsq or dosage r 2 alone could not. To exhibit the relationships, we plotted the theoretical values of MARE and β imp on the coordinates of Rsq and dosage r 2 ( Figure 3A and F ) to show that the overestimated Rsq was accompanied by a higher MARE ( Figure 3A ). 
We used rs142572000 as an example ( Figure 3B–E ). In the TOPMed imputation result, imputed genotypes were more certain (defined as the imputed dosage closer to 0, 1 or 2) ( Figure 3B ) [ 37 ] compared to the other three panels ( Figure 3C–E ). The high certainty increased Var( y ) and Rsq. In Supplementary Note 4 , the positive relationship between imputed-genotype certainty and Rsq is demonstrated. As discussed below, a high certainty or Rsq did not mean that the imputation is more accurate. As shown in Figure 3B , many heterozygotes were incorrectly imputed with a dosage of approximately 0 in the TOPMed imputation, causing higher MARE and Rsq and an even lower dosage r 2 . Variants with Rsq < dosage r 2 showed a lower β imp ( Figure 3F ). We used rs671 as another example ( Figure 3G–J ). Except for the TOPMed imputation result, imputed dosages were shrunk to the AAF in EAS (0.247) ( Figure 3H–J ), causing uncertainty in the imputed genotype, and decreased β imp , MARE and Rsq. However, imputed dosages were still highly correlated with the true genotypes ( Figure 3F ), causing dosage r 2 > Rsq. Taking the two examples together, imputation results of a similar Rsq or dosage r 2 may have a great difference in the imputed dosage; thus, more comprehensive evaluations are necessary and the deviation between Rsq and dosage r 2 should not be ignored. Supplementary Note 5 describes the selection criteria for rs142572000 and rs671. To evaluate the generalizability, we presented the three other examples (rs1047781, rs113230003 and rs7624610) reported by genome-wide association studies and compared Rsq − dosage r 2 to the imputed-genotype certainty and showed that in dosage r 2 bins 0.6–1, the imputed-genotype certainty increased as the deviation increased ( Supplementary Note 5 , Supplementary Tables 3 – 4 and Supplementary Figure 8 ). We categorized MARE into Rsq bins and β imp into dosage r 2 bins to compare them between reference panels and to the expected values when assuming Rsq equals dosage r 2 . The mean MARE of the TOPMed result was above the expected value for Rsq bins 0.35–0.9, and the mean β imp of the 1KGP result was below those of the other panels ( Supplementary Figure 9 ). As demonstrated above and in Supplementary Notes 4 – 5 , these metrics could reveal the imputed dosage distribution. Thus, these findings indicated that the TOPMed result was more certain at particular dosage r 2 bins and might contain more wrongly imputed genotypes compared to that expected from a deviation-free imputation result ( Figure 3B and G ). Template switching rate impacts the deviation between Rsq and dosage r 2 The high certainty suggests an overconfident matching between the reference panel and target sample. We investigated how the template switching rate ( θ ) used in the HMM affects the imputed-allele certainty using the 1KGP reference panel and 21 scalings of the θ value (0.01–100-fold of that estimated by Minimac3; Methods; Figure 1B ). Low θ values caused a deviation toward Rsq > EmpRsq from the 45-degree line when comparing the variants on chr19 (EmpRsq is the alternate of dosage r 2 ; Methods) ( Supplementary Figure 10A–C ), while high θ values caused Rsq < EmpRsq ( Supplementary Figure 10E–G ). Using rs10410162 as an example ( Supplementary Note 6 describes the selection criteria), Rsq and MARE increased as the θ value decreased ( Figure 4A ), with increasing the certainty of the imputed alleles ( Supplementary Figure 11A–C ) and deviation toward Rsq > EmpRsq ( Figure 4B ). 
As the θ value increased, the imputed allelic dosages were shrunk to the AAF (0.351) ( Supplementary Figure 11E–G ), resulting in a more drastic decrease in Rsq than EmpRsq ( Figure 4A ) and thereby causing EmpRsq > Rsq. Notably, EmpRsq and β imp were roughly maintained unless the θ value was scaled up by 2-fold or higher ( Figure 4A and C ), suggesting EmpRsq was insensitive to altering the θ value, particularly the downscaling. In contrast, Rsq was sensitive to the θ value and consistently increased with downscaling of the θ value, leading to the deviation between Rsq and EmpRsq. Such observations were verified using the same variants (rs1047781, rs113230003 and rs7624610) mentioned above ( Supplementary Note 6 , Supplementary Table 5 and Supplementary Figure 12 ). Furthermore, the relationship between the deviation and the imputed-allele certainty was verified using all variants on chr19 ( Supplementary Note 6 and Supplementary Table 6 ). We also showed that low θ values increased the imputed-allele certainty using a randomly selected target haplotype ( Figure 4D and Supplementary Figure 13 ). High certainty might cause some variants to be wrongly imputed as the opposite allele ( Supplementary Figure 14 ). This is explained further in Supplementary Note 7 . Such imputed-dosage properties were also revealed by the mean MARE and β imp across all variants on chr19 ( Supplementary Figure 15 ). Taking the metric values and imputed dosages together, the simulated imputation using an extremely low θ value mimics the high certainty and Rsq overestimation in the TOPMed imputation result in EAS ( Figure 3 ). Template switching rate impacts the imputation performance Using the modified 1KGP panels, although the mean EmpRsq was insensitive to the scaling of the θ value (maximum difference of 0.034 for the θ value scaling between 0.01- and 2-fold; Figure 5 ), low θ value increased Rsq ( Figure 5 ). We evaluated the number of confident alleles (HDS > 0.9) and high-Rsq variants (Rsq > 0.7) (Methods). Downscaling the θ value increased the number of confident alleles and high-Rsq variants ( Supplementary Table 7 ). When the θ value was 0.5-fold, the number of confident alleles and high-Rsq variants increased by 5.46 and 8.89%, respectively, and if the θ value was 2-fold, these numbers decreased by 11.8 and 13.8%, respectively. These results indicated that the θ value shaped the imputed allelic dosage and changed Rsq and the number of high-Rsq variants while leaving EmpRsq almost the same. Reference panel and θ estimates We evaluated how Minimac3’s parameter estimation changed with the composition of the reference panel ( Supplementary Method 4 , Supplementary Figure 16 , and Supplementary Note 3 ). The θ estimates decreased with the sample size of a single ancestry and increased with ancestral diversity when size was fixed ( Figure 6A and B ). On pooling samples from different ancestries together at a small panel size, the θ estimates still decreased as the panel size increased ( Figure 6C ); however, at a larger panel size, it increased with simultaneously increasing the panel size and ancestral diversity ( Figure 6D ), suggesting that the θ estimates were in a trade-off between the panel size and ancestral diversity ( Figure 6E ). Supplementary Note 8 explains these effects. 
Fitness of the θ value, deviation and imputation performance To elucidate the multi-ancestry reference panel’s impact on the imputation result, we simulated two scenarios using the multi-ancestry reference panels: the target sample was from the (1) minor and (2) major ancestry ( Figure 1C ). Scenario 1: the target sample was from the minor ancestry We simulated 8 EUR-EAS reference panels and used 100 EUR samples as the target (Methods). As the panel size increased and θ value decreased ( Table 2 ), Rsq and MARE were upwardly biased, as expected ( Supplementary Figures 17 and 18 ). The mean EmpRsq decreased consistently with the addition of EAS samples to the reference panel ( Table 2 ). However, only a marginal difference was observed (maximum difference of 0.020). There were 114 606 EUR-only and 696 017 non-EUR variants (Methods). As the panel size increased and θ value decreased, the number of confident alleles increased from 84 797 to 101 116 and from 0 to 458 for EUR-only and non-EUR variants, respectively ( Supplementary Table 8 ). The number of high-Rsq variants increased from 163 468 to 200 161, 18 646 to 27 941 and 0 to 521 for all, EUR-only and non-EUR variants, respectively ( Supplementary Table 8 ). These results revealed that a lower θ value increased the number of confident alleles and high-Rsq variants, without improving the EmpRsq ( Supplementary Table 8 ). Meanwhile, even if the reference panel comprised 90.3% EAS samples, non-EUR variants only comprised 0.26% of the total variants in the imputation result (when setting a cutoff of Rsq > 0.7). The majority of variants gained from the larger panel when imputing the under-represented ancestry was because of the lower θ value. Scenario 2: the target sample was from the major ancestry We simulated 13 JPT-1KGP reference panels and used WGS 993 as the target (Methods). When only JPT samples were in the panel, the mean EmpRsq and Rsq increased with the panel size, as expected ( Supplementary Figure 19 ). When combined with the 1KGP subsets, the mean EmpRsq was highest when using JPT 3256 + 1KGP-EAS for variants with MAF ≥ 1% and JPT 3256 + 1KGP-JPT for variants with 1% > MAF ≥ 0.5% ( Table 3 ). Adding other ancestries decreased the mean EmpRsq marginally (maximum difference < 0.01 for all MAF categories). The mean Rsq was the highest when using JPT 3256 and decreased with the addition of more ancestries, with maximum differences of 0.007, 0.024 and 0.048 for variants with MAF ≥ 5%, 5% > MAF ≥ 1%, and 1% > MAF ≥ 0.5%, respectively ( Table 3 ). None of the imputation results showed a noticeable deviation in the MARE and β imp . However, the MARE and β imp of the JPT 3256 , JPT 3256 + 1KGP-JPT and JPT 3256 + 1KGP-EAS results were closer to the expected values ( Supplementary Figure 20 ), while adding other ancestries made MARE and β imp correspond to the condition of using a higher θ value. There were 74 490 JPT 3256 -only, 7627 1KGP-EAS-only and 495 489 non-EAS variants (Methods). From JPT 3256 to JPT 3256 + 1KGP, the number of confident alleles decreased from 24 599 to 23 666, increased from 0 to 131 and increased from 0 to 254 for these three groups of variants, respectively ( Supplementary Table 9 ). The number of high-Rsq variants decreased from 274 343 to 264 221 (a decrease of 3.69%) when using JPT 3256 or JPT 3256 + 1KGP ( Supplementary Table 9 ). Only 265 non-EAS variants reached an Rsq > 0.7 when using JPT 3256 + 1KGP, which was 0.10% of the total variants. 
Hence, combining with the 1KGP caused fewer confident alleles and high-Rsq variants to remain in the imputation results, as expected from using higher θ values. Both scenarios indicated that the distant ancestry in the reference panel affected the θ estimates and the number of high-Rsq variants. However, the expanded haplotypes and variant sets from distant ancestries provided only a few additional variants. EAS imputation using the public reference panels We used the TOPMed imputation pipeline (Methods) and observed an upward deviation of Rsq, particularly when using the 1KGP panel ( Supplementary Figure 21 ). The θ value transformed from the genetic map was 0.15- and 0.27-fold of that estimated by Minimac3 when using the 1KGP and JEWEL3k panels, respectively ( Supplementary Table 10 ). As the θ estimates decreased with the size of the major ancestry ( Figure 6C ), the θ value would be underestimated for the minor ancestry in the multi-ancestry panel, regardless of whether the genetic map was used or the value was estimated by Minimac3. One example was the HRC panel, which did not use the genetic map but induced Rsq overestimation in the BBJ-180k ( Supplementary Figure 22 ). Thus, caution may be required when imputing the under-represented population using large ancestry-imbalanced panels under the current framework.
DISCUSSION The deviation between Rsq and dosage r 2 in the imputation result arises from the imputed dosage and has been widely observed using different reference panels and software [ 9 , 15 , 38 ]. One reason for this observation is that the θ value used in the HMM is not jointly matched to the panel size, the ancestral components in the reference panel and the target population. When using the multi-ancestry reference panel, distant ancestries affect the θ estimates (in Minimac3/4), which in turn affect the imputed dosage, dosage r 2 , Rsq and the deviation. Moreover, the addition of distant ancestries only contributes a few high-Rsq variants, suggesting that these reference haplotypes have been assigned a low probability in the HMM. The subsetting of closely related samples from the reference panel has been adopted by the IMPUTE software; however, IMPUTE has been reported to produce an INFO score deviating from dosage r 2 , indicating that the construction of a single-ancestry reference panel is still the optimal choice under the current HMM framework. Our simulations indicate that the lower θ value used by a large multi-ancestry panel could increase the imputed-genotype certainty but not dosage r 2 . As Rsq is a measurement of certainty and is not related to the true genotype, using a multi-ancestry panel may lead to confusing benchmarking results and increase the chance of false positives in association tests. Ferwerda et al. reported height and body mass index association signals in an ethnically diverse cohort only when imputing against the TOPMed panel [ 39 ]. Bai et al. reported the highest Rsq but lowest dosage r 2 when imputing the Han Chinese using the HRC panel, compared to the 1KGP and a population-specific panel [ 22 ], similar to our observations in the Japanese population. Dosage r 2 determines the association test power compared to using the true genotype [ 10 ]. When using Rsq as an indicator of dosage r 2 , which indicates the power, the deviation may affect the optimal choices of downstream analyses [ 40 ]. Besides dosage r 2 and Rsq, imputation has been reported to cause variability in genetic score calculations [ 41 ], which may further affect downstream analyses that require genotype aggregation, such as polygenic score estimation and transcriptome-wide association studies [ 42 ]. Such implications suggest that a thorough examination of the imputation result is warranted when using the multi-ancestry reference panel. In addition to simply comparing dosage r 2 or Rsq, we have detailed the changes in these metrics, θ estimates, imputed-dosage certainty and the number of high-Rsq variants passed to downstream analyses. Our findings were further validated and confirmed by replicating all analyses using SNPs on chromosome 20 ( Supplementary Note 9 and Supplementary Tables 11 – 20 ). Furthermore, we have provided a script ( https://github.com/shimaomao26/impumetric ) to allow users to check their results using the leave-one-out imputation of Minimac4 (external WGS not required) or additional WGS data. The TOPMed imputation pipeline used the θ value transformed from the HapMap2 genetic map by assuming that 0.01 centimorgan (cM) corresponds to a 1% switching rate. The switching rate would thereby be fixed given the variant’s base-pair position and centimorgan coordinate. Two reasons may explain why this method works well. First, the genetic map did not significantly change the imputation accuracy, as verified in a Finnish study [ 43 ]. We also showed that dosage r 2 was insensitive to the θ value.
Second, the θ value was only underestimated for the EAS (size = 1184), but the sizes of EUR, AFR and AMR (size = 17 085–47 159) might fit this value. Previous studies have reported that Rsq in the TOPMed imputation result was sometimes misleadingly high in AFR [ 38 ] and EUR [ 44 ]. A recent paper reported that the TOPMed panel was more robust to the low-density genotyping array than the HRC and 1KGP panels [ 18 ]. These results also indicated that a fixed low θ value might be used by the TOPMed pipeline, as we have inferred. Other imputation tools, such as IMPUTE and BEAGLE, use fixed recombination rates predetermined by the HapMap2 genetic map. Although we did not explicitly evaluate different tools in this study, the INFO score has overestimated dosage r 2 in multiple studies [ 9 , 29 , 30 ], indicating that the relationship between this parameter and the imputation result is not software or reference panel specific. Further studies are warranted to comprehensively elucidate the impacts on genetic studies. Our results suggest that avoiding ancestral diversity is best when more than 3000 WGS samples are available to construct a JPT reference panel. Increased diversity would then only have a marginal impact on dosage r 2 but would affect the individual’s imputed dosage and cause fewer variants to pass a predetermined Rsq filter. Zhang et al. and Cong et al. similarly observed that adding the 1KGP to about 3000 Chinese WGS samples would neither benefit nor harm dosage r 2 [ 26 , 27 ]. We focused on explaining the reason underlying this observation in this work. Further studies are warranted to study the algorithm implementation and parameter estimation and to take advantage of combining population-specific and public WGS datasets while avoiding our identified problems. Our study has several limitations. First, our simulations of the reference panel were study specific. As discussed, the θ estimates decreased with the panel size and increased with the ancestral diversity. However, a larger panel typically increases the diversity simultaneously. Therefore, the θ value and imputation result need to be discussed case by case. This limitation also implies that the general experience may not work well for a new reference panel and target dataset. Second, in simulations of multi-ancestry reference panels (scenarios 1 and 2), WGS datasets were merged using IMPUTE2 [ 9 ]. We did not investigate the impact of IMPUTE2 but treated the merged dataset similar to the WGS dataset, possibly, underestimating the number of high-Rsq variants from the distant ancestry ( Supplementary Note 10 ). Our results using the 1KGP (not modified by IMPUTE2) also revealed that only a limited number of confident alleles and high-Rsq variants in the imputation result were from the distant ancestry ( Supplementary Note 10 and Supplementary Table 7 ). Therefore, the conclusion that distant ancestry in the reference panel affects the θ estimates rather than providing high-Rsq variants would be valid. However, further studies should be conducted to determine the impact of the panel-merging method and the net gain of variants from distant ancestries. Third, we did not consider the phasing error in the reference panel and target sample. Further studies are warranted to evaluate the HMM parameter’s impact on phasing accuracy and the different combinations of pre-phasing and imputation method. 
In summary, we explained that the HMM parameter could be a potential reason for inaccurate Rsq and inferior dosage r 2 when using large multi-ancestry reference panels. This is also the first study of the relationship between the template switching rate, imputed-genotype certainty, Rsq and dosage r 2 . We envision that our methods and conclusions could provide insights for benchmarking studies, construction of reference panels, and development of imputation algorithms and pipelines in the future.
CONCLUSION The relationship between the reference panel, the template switching rate ( θ value), the imputed dosage, and the deviation between Rsq and dosage r 2 is summarized in Table 4 . When a multi-ancestry reference panel is used, dosage r 2 is insensitive to a range of θ values, while Rsq increases with a lower θ value. This can create a deviation between Rsq and dosage r 2 . For under-represented populations in large multi-ancestry reference panels, the majority of the gain in additional high-Rsq variants comes not from the additional reference haplotypes but rather from the low θ value used and the resulting Rsq overestimation. On the other hand, for the major ancestry in the reference panel, the higher θ value causes fewer variants to remain in the Rsq-filtered dataset. Our findings suggest utilizing only the matched single-ancestry reference panel and avoiding benchmarking based on Rsq or dosage r 2 alone.
Abstract Large-scale imputation reference panels are currently available and have contributed to efficient genome-wide association studies through genotype imputation. However, whether large-size multi-ancestry or small-size population-specific reference panels are the optimal choices for under-represented populations continues to be debated. We imputed genotypes of East Asian (180k Japanese) subjects using the Trans-Omics for Precision Medicine reference panel and found that the standard imputation quality metric (Rsq) overestimated dosage r 2 (the squared correlation between imputed dosage and true genotype), particularly in marginal-quality bins. Variance component analysis of Rsq revealed that the increased imputed-genotype certainty (dosages closer to 0, 1 or 2) caused upward bias, indicating a systematic bias in the imputation. Through systematic simulations using different template switching rates ( θ value) in the hidden Markov model, we revealed that a lower θ value increased the imputed-genotype certainty and Rsq; however, dosage r 2 was insensitive to the θ value, thereby causing a deviation. In simulated reference panels with different sizes and ancestral diversities, the θ value estimates from Minimac decreased with the size of a single ancestry and increased with the ancestral diversity. Thus, Rsq can deviate from dosage r 2 for a subpopulation in the multi-ancestry panel, and the deviation reflects different imputed-dosage distributions. Finally, despite the impact of the θ value, distant ancestries in the reference panel contributed only a few additional variants passing a predefined Rsq threshold. We conclude that the θ value substantially impacts the imputed dosage and the imputation quality metric value.
Web resources Impumetric, https://github.com/shimaomao26/impumetric TOPMed imputation server, https://imputation.biodatacatalyst.nhlbi.nih.gov/#! Michigan Imputation server, https://imputationserver.sph.umich.edu/index.html Minimac3, https://genome.sph.umich.edu/wiki/Minimac3 Minimac4, https://genome.sph.umich.edu/wiki/Minimac4 EAGLE, https://alkesgroup.broadinstitute.org/Eagle/ Bcftools, https://samtools.github.io/bcftools/ LiftOver, https://genome.ucsc.edu/cgi-bin/hgLiftOver Supplementary Material
Acknowledgements We thank the staff of the BBJ for collecting and managing samples and clinical information. We acknowledge the Human Genome Center, the Institute of Medical Science, the University of Tokyo ( http://sc.hgc.jp/shirokane.html ) and the Digital Research Alliance of Canada for providing super-computing resources. Funding Ministry of Education, Culture, Sports, Sciences and Technology (MEXT) of Japanese government and the Japan Agency for Medical Research and Development (AMED) under grant numbers JP18km0605001 (the BioBank Japan project) and JP19km0405215 (to C.T., K.M. and Y.K.). Data availability A tool to replicate the metrics used in this work as well as other scripts are available at https://github.com/shimaomao26/impumetric . Genotype data for BBJ and the TOPMed imputation results were deposited at NBDC Human Database (research ID: hum0014 and hum0311, respectively). Author Biographies Mingyang Shi is a PhD candidate at the University of Tokyo, Japan. Chizu Tanikawa is an associate professor at the University of Tokyo, Japan. Hans Markus Munter is a research associate at McGill University, Canada. Masato Akiyama is a lecturer at Kyushu University, Japan. Satoshi Koyama is a research fellow at Broad Institute, USA. Kohei Tomizuka is a senior technical scientist at RIKEN Center for Integrative Medical Sciences, Japan. Koichi Matsuda is a professor at the University of Tokyo, Japan. Gregory Mark Lathrop is a professor at McGill University, Canada. Chikashi Terao is the team leader of the Laboratory for Statistical and Translational Genetics in the RIKEN Center for Integrative Medical Sciences, Japan. Masaru Koido is an assistant professor at the University of Tokyo, Japan. Yoichiro Kamatani is a professor at the University of Tokyo, Japan. Appendix Imputation quality metrics Rsq: Standard quality metric in Minimac. The ratio between observed variance and expected variance of imputed allelic dosage. INFO: Standard quality metric in IMPUTE. The ratio between observed information and complete information of imputed-genotype distribution. Dosage r 2 : The squared Pearson correlation between the imputed dosage and true genotype. EmpRsq: The squared Pearson correlation between the imputed allelic dosage and true allele dose. Provided by Minimac and is only available for the genotyped markers on array. MARE: The minor allele frequency adjusted residual error in linear regression between the imputed dosage and true genotype. β imp : The regression coefficient between the imputed dosage and true genotype.
CC BY
no
2024-01-16 23:43:49
Brief Bioinform. 2024 Jan 13; 25(1):bbad509
oa_package/d5/23/PMC10788679.tar.gz
PMC10788680
38221903
INTRODUCTION Single-cell RNA sequencing (scRNA-seq) associates gene expression data with an individual cell in a sample. Cellular heterogeneity in RNA transcripts is critical for answering questions about disease development and treatment. Therefore, it is no surprise that there is growing enthusiasm for exploring the unique transcriptomic profile of each cell by scRNA-seq for cell type identification. Unlike bulk RNA-seq, the expression of genes from scRNA-seq is highly sparse due to limited sequencing depth per cell, which increases the chance of model overfitting and hinders downstream analysis [ 1 ]. The observed zeros can either represent a true gene expression level or be the result of methodological noise. Furthermore, gene expression data measured via transcriptomic profiling are high-dimensional. Relative to the typically small sample sizes, the high-dimensional gene expression from scRNA-seq is subject to the curse of dimensionality. The simplest and most effective way to deal with data sparsity and the curse of dimensionality is to increase the sample size [ 2 ], which requires access to a large number of diverse datasets. Although aggregating single-cell gene expression datasets can bolster the sensitivity and robustness of cell type identification [ 3 ], the adoption of dataset aggregation is often hampered by privacy regulations. As highlighted by Ferguson [ 4 ], the field of bioinformatics, which includes scRNA-seq data, is fraught with ethical and privacy concerns. Individuals can potentially be identified through their genetic data, and sharing and transferring personal genetic data might expose sensitive health information. Therefore, rigorous regulations, such as the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA), have been developed to govern the access to and analysis of such data. Given these constraints, privacy-preserving federated learning solutions play a vital role in helping researchers to aggregate and explore single-cell gene expression datasets without sharing the original data of each institution. To collaboratively learn a shared cell type identification model while keeping all the training data local, a growing number of federated learning approaches are being adapted to automatically label cells in scRNA-seq experiments. PriCell was proposed as a federated neural network learning approach for disease-associated cell classification [ 5 ]. scPrivacy utilized federated deep metric learning algorithms to train a federated cell type identification model on multiple institutional datasets in a privacy-preserving manner [ 6 ]. PPML-Omics [ 7 ] analyzed data from three sequencing technologies with a privacy-preserving federated framework, clustering cell populations with an Auto-encoder and k-means clustering. These approaches align with the growing emphasis on ensuring data privacy in bioinformatics, as underscored by the challenges and concerns presented in the field. Federated-learning-based scRNA-seq classification methods are relatively new compared with traditional scRNA-seq classification methods, but they share the common goal of accurately annotating cells without leaking private information. Various machine learning approaches, such as neural networks, support vector machines (SVM) and gradient boosting machines, have been utilized to identify cell types [ 8 ].
Moreover, Transformer-based models have emerged as an additional choice for cell type identification, leveraging their self-attention mechanisms to capture complex cellular patterns [ 9 , 10 ]. While UniFed demonstrated that the selection of classification methods is the main factor affecting model performance in federated learning frameworks [ 11 ], the absence of a detailed comparative study of classification methods in federated learning for scRNA-seq leaves users without clear guidelines for selecting the optimal method for their specific challenges within this framework. Here, we propose scFed as a unified federated learning framework to benchmark a range of classification methods, providing researchers with a systematic guide to conduct scRNA-seq analysis while ensuring data privacy. Our study employed both general-purpose and single-cell-specific classifiers for cell type identification with scRNA-seq. While SVM and XGBoost served as general-purpose classifiers, ACTINN [ 12 ] was tailored specifically for scRNA-seq data. The selection of SVM and XGBoost was based on their established efficacy across a range of datasets [ 8 ], whereas ACTINN was incorporated for its specialization in scRNA-seq data. Eight publicly available scRNA-seq datasets of different sizes, species and technologies were employed for performance comparison. The performance of federated-learning-based classification methods was evaluated based on their accuracy and computation time. We performed several experiments covering different aspects of federated learning and classification tasks, such as datasets, client numbers and algorithm comparison. We also integrated Geneformer [ 10 ], a Transformer-based model, into the scFed framework to evaluate its potential for cell type identification. While it exhibited promising classification capabilities, it also posed significant computational demands. Thus, we benchmarked its performance against other classification methods, focusing on both accuracy and computation time, to provide a comprehensive view of this task. Our experiments revealed considerable variations in classification performance and computation time across different classification algorithms, and our assessment demonstrated scFed’s effectiveness, time efficiency and robustness for privacy-preserving integration of multiple client datasets.
METHODS To quantitatively evaluate the federated learning framework for single cell classification, we propose scFed, which integrates several single cell classification algorithms into a federated learning framework. System overview We summarize the workflow of scFed’s system as shown in Figure 1 . It is a federated learning framework that allows a global model to be trained using decentralized data scattered among a large number of different clients, without uploading client data to servers. Essentially, the framework assumes the existence of a set of activated clients, each possessing its own local dataset. Our objective is to develop a cell type identification model that incorporates the datasets from all clients. Classification algorithm Prioritizing data privacy through the adoption of the federated learning framework, this workflow supports four key classification algorithms: neural networks, tree-based models, SVMs and Transformer-based models, which are also fundamental algorithms for single cell classification. In this section, we discuss the federated-learning-based classification algorithms in detail. Neural network (ACTINN) With their strong ability to learn high-level features from data, deep learning networks do not need domain knowledge to select features, which is beneficial for the classification of huge numbers of cells. A variety of deep learning models have been explored to identify cell types [ 13 ]. A recent cell classification method, ACTINN [ 12 ], employs a fully connected neural network for cell type classification. In this study, we integrate ACTINN into scFed by training neural network models with a variant of the FedAvg algorithm [ 14 ], as shown in Algorithm 1. The variant of the FedAvg algorithm presented in Algorithm 1 is a federated learning approach designed for efficient communication in deep learning networks with decentralized data. The algorithm begins by initializing the global model weights and then iterates through communication rounds. In the original FedAvg algorithm, each client performs a number of epochs of local stochastic gradient updates on its local data before sending an updated model to the server. By setting the number of local epochs to 1, each client performs only a single pass over its data, which corresponds to one weight-update round in a centralized setting, thus making it more comparable with a centralized training process for scFed’s performance evaluation. During each round, the server selects all clients to participate in the computation. The server then sends the global model weights to the clients. Each client fine-tunes its local model by performing a training epoch over its local dataset and computes the updated model weights. Once completed, the clients send their updated weights back to the server. Finally, the server aggregates the updated weights from all clients, updating the global model weights as a weighted sum of the local weights, with weights proportional to the relative size of each client’s dataset (a schematic sketch of this aggregation is given at the end of this section). This process is repeated for a predefined number of communication rounds, ultimately converging to an effective model that has learned from the decentralized data. Support vector machine SVMs [ 15 ] have gained significant popularity as a classification method in recent years, owing to their strong theoretical foundation, high accuracy and robustness to overfitting.
SVMs have been extensively applied to classify gene expression data measured by scRNA-seq [ 16 ], addressing the challenges of high dimensionality, sparsity and noise inherent in such data. In the context of federated learning, constructing a federated SVM with mathematical rigor is essential to ensure effective collaboration and privacy preservation. For this purpose, we specifically focus on the linear SVM, as it provides a more straightforward and computationally efficient approach compared with nonlinear kernels while still offering satisfactory performance. The linear SVM model can be represented by the weight vector orthogonal to the separating hyperplane and the corresponding intercept, which together define the decision boundary for classification. In a federated learning setup, each participating client holds a local dataset and trains a linear SVM model independently, without sharing the raw data. The locally trained models, consisting of weight vectors and intercepts, can then be aggregated and converted into a global model [ 17 ]. This is achieved by computing a weighted average of the local models with the weights determined by the relative size of each client’s dataset. The resulting global model captures the collective knowledge from all participating clients while preserving data privacy. Tree-based model (XGBoost) Both single decision tree and ensemble decision tree models, such as gradient boosting decision trees (GBDT) [ 18 ] and random forests, can be learned via federated learning. Owing to GBDT’s excellent performance in classification applications, it has been widely used for single cell classification [ 19 ]. In this work, we train an XGBoost model via a federated learning framework to avoid leaking private client data. The federated XGBoost training framework is implemented with four steps in each communication round. Firstly, the server sends the initial parameters or the new tree to the clients. Secondly, the clients update their gradient histograms separately. Thirdly, the clients send the gradient histograms to the server. Finally, the server merges the histograms and boosts a new tree [ 20 ]. Transformer-based model (Geneformer) Transformer-based models are capable of analyzing vast datasets of single-cell transcriptomic data [ 9 , 10 ]. Their capability to capture long-range dependencies in data points makes them an effective tool for exploring biological information in computational contexts. A Transformer-based model called Geneformer [ 10 ] is pretrained on a large corpus of around 30 million single-cell transcriptomes to facilitate context-specific predictions. To utilize Geneformer for classification tasks, it is necessary to fine-tune it on a specific dataset. Within scFed, the fine-tuning of Geneformer primarily comprises the following four steps. Firstly, each client loads the pretrained model. Secondly, clients conduct local training to fine-tune the parameters. Thirdly, each client sends the model parameters from this round of fine-tuning to the server. Finally, the server aggregates and disseminates the model parameters to all the clients.
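As a schematic sketch of the FedAvg-style aggregation described for the neural network above (not the Algorithm 1 implementation; the local_update callable stands in for a client's local training epoch):

```python
import numpy as np

def fedavg_round(global_weights, clients, local_update, epochs=1):
    """One communication round of a FedAvg-style aggregation (schematic).

    global_weights : list of numpy arrays holding the global model parameters
    clients        : list of (X_k, y_k) local datasets, which never leave the clients
    local_update   : callable(weights, X, y, epochs) -> updated weights,
                     standing in for one client's local training epoch(s)
    """
    n_total = sum(len(y_k) for _, y_k in clients)
    new_global = [np.zeros_like(w) for w in global_weights]
    for X_k, y_k in clients:
        # Client side: start from the broadcast global weights and run the local epoch(s).
        local_w = local_update([w.copy() for w in global_weights], X_k, y_k, epochs)
        # Server side: weight each client's parameters by its relative dataset size.
        for agg, w in zip(new_global, local_w):
            agg += (len(y_k) / n_total) * w
    return new_global
```

Similarly, a minimal sketch of the size-weighted averaging of local linear SVMs (a simplified illustration; it assumes every client observes the same set of cell-type labels so that the weight matrices align):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def train_local_svm(X, y):
    """Client side: train a linear SVM (hinge loss) on local data only."""
    clf = SGDClassifier(loss="hinge", max_iter=1000, tol=1e-3)
    clf.fit(X, y)
    return clf.coef_, clf.intercept_, len(y)

def aggregate_linear_svms(local_models):
    """Server side: size-weighted average of local weight vectors and intercepts,
    yielding a single global linear decision boundary."""
    n_total = sum(n for _, _, n in local_models)
    coef = sum((n / n_total) * c for c, _, n in local_models)
    intercept = sum((n / n_total) * b for _, b, n in local_models)
    return coef, intercept
```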
RESULTS We evaluated our proposed scFed in terms of model accuracy, scalability with the number of clients, classification algorithm feasibility and runtime analysis. The generalizability of our federated learning workflow was tested through both intra-dataset and inter-dataset classifications. Given the computational intensity of Transformer architectures, we conducted a specialized evaluation of Geneformer. Using the Zhengsorted dataset as a benchmark, we evaluated Geneformer on both classification accuracy and runtime metrics. Datasets A total of eight scRNA-seq datasets were used to evaluate and benchmark scFed with all classification methods; all eight were used for intra-dataset evaluation, and five were used for inter-dataset evaluation. The datasets vary across sequencing protocols, species and tissues ( Table 1 ). Five pancreas datasets sequenced with different protocols were used. The BaronHuman [ 21 ], Muraro [ 22 ], Segerstolpe [ 23 ] and Xin [ 24 ] datasets are all from the human pancreas, and BaronMouse [ 21 ] is from the mouse pancreas. Zhengsorted [ 25 ] was sequenced from human peripheral blood mononuclear cells. The AMB dataset [ 26 ] is from the Allen Mouse Brain. The Tabula Muris (TM) dataset represents a relatively large scRNA-seq dataset (>50 000 cells) [ 27 ]. More details about the datasets are described in [ 13 ]. In our intra-dataset classification, we randomly split the entire dataset into 80% as training data and 20% as test data, and evenly distributed the training set over the clients. We used the same training and test datasets for each set of comparative experiments. In our inter-dataset classification study, we first combined the four human pancreas datasets (Xin, BaronHuman, Muraro and Segerstolpe) and then used three of them as the training dataset and the remaining one as the test dataset. In this case, each client holds one training dataset, and these are integrated to train scFed. For data preprocessing, we filtered genes with zero counts across all cells. To remove the influence of technical effects while preserving true biological heterogeneity, CPM [ 28 ] normalization was applied for count depth scaling. Next, the gene expression data were log-transformed using log2(count + 1). Implementation The FL framework is implemented in Python and utilizes socket programming to establish connections between the server and clients, thereby maintaining privacy and efficient use of distributed data. The system is powered by two Intel(R) Xeon(R) Platinum 8358 CPUs running at 2.60GHz with 128 threads on 64 cores and 512 GB RAM. In the Geneformer setting, the experiments were run on two Hygon C86 CPUs at 2.00 GHz with 128 threads on 64 cores, 128 GB RAM and an NVIDIA Tesla A40 GPU. We use the TensorFlow library to construct the federated version of ACTINN. The structure of the global neural network model is saved and replicated across clients by a federated server. Subsequently, model parameters are extracted from the local models and sent to the server for aggregation. For the neural-network-based algorithm ACTINN, we used an initial learning rate of 1e-4 with Adam optimization for each client, and mini-batches of size 128 were sampled from the dataset. We chose the scikit-learn library, which provides comprehensive tools and functions for SVM models, to access the weights of an SVM model and perform the model aggregation while ensuring the preservation of data privacy across all clients.
The linear SVM is configured via SGDClassifier in scikit-learn with the hinge loss function. For the XGBoost model, we use XGBoost [ 18 ], an open-source software library, to implement federated XGBoost. Federated XGBoost is a gradient boosting library for the federated setting which enables multiple parties to jointly compute a model while keeping their data on site, avoiding the need for central data storage. Analogous to communication rounds, we set the number of rounds for boosting to 30. For the implementation of the Transformer-based model in scFed, we use Geneformer [ 10 ] to implement federated Geneformer. Leveraging the pretrained weights from Geneformer, we perform fine-tuning by appending a task-specific transformer layer and use the trainer provided by the Huggingface Transformers library [ 29 ]. Consistent hyperparameters are applied for fine-tuning: a fixed max learning rate, a linear scheduler with warmup, the Adam optimizer with weight decay fix, 500 warmup steps, weight decay of 0.001 and a batch size of 4. Statistical analysis We conducted a Wilcoxon signed-rank test, which tested whether there was a significant difference in accuracy between the two models. By assessing the P-value of the Wilcoxon statistical test, we were able to establish whether the null hypothesis (i.e. no significant difference between the global and centralized models) could be rejected in favor of the alternative hypothesis (a significant difference between the two models). A P-value greater than 0.05 indicated that we lacked sufficient evidence to reject the null hypothesis. In the following reports, the P-values for comparisons between the corresponding boxplots are shown as an interval. The thresholds are represented in a star format as [1e-4, '****'], [1e-3, '***'], [1e-2, '**'], [0.05, '*'], [1, 'ns']. Benchmarking federated learning for single cell classification (Intra-dataset evaluation) In this experiment, we evaluated the scFed performance by training and testing on subsets of cells from the same scRNA-seq dataset. We named this an intra-dataset evaluation. The comparisons were made by reporting results from the following scenarios: (a) A centralized training data model for cell type identification, which served as our baseline. (b) Each client trains its own local model without collaboration, with the average F1 score of all local models reported. (c) scFed is applied to obtain global models for cell type identification. For all experiments, we maintain a fixed test dataset to assess the centralized, local and global model performance, with five independent repetitions performed to determine the classification results. Performance evaluation across different datasets We utilized eight datasets distributed over clients to assess the performance of global models implemented via scFed in comparison with local and centralized models for cell type identification tasks. The boxplots in Figure 2 collectively display the F1 scores of cell type classification with the SVM, ACTINN and XGBoost algorithms. Drawing from the Wilcoxon signed-rank test results, as illustrated in Figure 2 , we observed no statistically significant difference in performance between our global models, trained using scFed, and traditional centralized models across all evaluated datasets. This demonstrates that scFed's global model not only matches the performance of the centralized model but does so while maintaining crucial data privacy considerations.
It also suggests that scFed can be a viable alternative for single-cell classification tasks while maintaining data privacy. Furthermore, global models outperform local models except for the Xin dataset, which is the smallest one in our study. This indicates that federated learning can effectively aggregate and learn from the information distributed across multiple clients, leading to a more accurate and robust global model. Performance evaluation across different classification algorithms In this experiment, we also conducted thorough comparisons of centralized, local and global models to understand the effect of the classification algorithm on cell type identification performance. The boxplots in Figure 3 collectively display the F1 scores of cell type classification for the eight datasets shown in Table 1 . In this set of experiments, we again fixed the number of clients at five. Figure 3 shows that the highest performance for cell type classification is attained by the centralized SVM model. The global SVM model demonstrates a performance level statistically indistinguishable from that of the centralized SVM model. Both the global and centralized SVM models outperform the corresponding ACTINN and XGBoost models, and the statistical significance of this difference was confirmed using a Wilcoxon signed-rank test. In all cases, global models surpass their local counterparts, regardless of the classification algorithm employed. When comparing ACTINN and XGBoost, no significant performance differences were observed between their centralized and global models. This finding highlights that both algorithms provide comparable results in the context of scRNA-seq cell type classification. Performance evaluation over different numbers of clients In this section, we investigated the impact of varying the number of clients on the performance of scFed. We systematically varied the number of clients participating in the federated learning process among 2, 5, 10 and 20. Figure 4 shows the scalability of scFed with the number of clients. The boxplots collectively display the F1 scores of cell type classification with the SVM, ACTINN and XGBoost algorithms, gathered from the eight datasets shown in Table 1 . Figure 4 demonstrates that scFed yields performance comparable to centralized models and significantly surpasses local models in most client-number settings of this experiment, with one exception. When evaluating classification performance across the various client quantities within scFed, we noticed no substantial difference among the smaller client counts. However, a rise in client numbers to 20 resulted in a drop in classification performance for both global and local models. According to the results of the Wilcoxon signed-rank test, significant differences are apparent among global models as client numbers increase, with the exception of one pairwise comparison. Performance evaluation across datasets (Inter-dataset evaluation) In order to examine the generalization of the proposed model, we evaluated the performance of cross-dataset classification, which is a more realistic scenario. Since the Xin, BaronHuman, Muraro and Segerstolpe datasets are all from the human pancreas, we used these four datasets for the validation. Common cell types among these four datasets are alpha, beta, delta and gamma, so we extracted these four cell types from each dataset for combination. Five independent repetitions were performed to determine the classification results.
Before combining the datasets, we preprocessed the data using CPM normalization and log-transformation. We then standardized each dataset by min–max scaling to bring the four datasets onto the same scale. We conducted four experiments. In each experiment, three of the four datasets were used as the training data and one was left out as the test dataset. This approach allowed us to assess the model's generalization capabilities across different datasets while maintaining a consistent evaluation setup. Figure 5 revealed no significant statistical difference between the centralized and global models' performances, as evidenced by the Wilcoxon signed-rank test results. This lack of difference underscores the effectiveness of scFed as a model that can perform comparably with centralized models without compromising the privacy of the original local data. However, a striking improvement was observed when comparing the global models with local ones. The global model, which integrated information from multiple datasets, demonstrated significantly superior performance in comparison with local models. This enhancement indicates the power of scFed to harness shared information across multiple clients, thereby improving performance in a real-world, heterogeneous data scenario. These results substantiate the scFed model's potential as a valuable tool in federated learning, capable of effectively generalizing across diverse datasets. Runtime comparison In this section, we provide a comprehensive analysis of the runtime performance of the SVM and ACTINN models, which we have specifically developed for the federated learning framework, referred to as the global model (scFed). This evaluation also includes comparisons with their local and centralized counterparts. Please note that this study primarily emphasizes the models we have tailored and implemented for the federated learning environment; thus, we have chosen not to include a runtime comparison for the federated XGBoost model, which is an externally sourced implementation. To make a fair runtime comparison among the global, centralized and local models, we standardized the number of training iterations across the compared pipelines. Specifically, we fixed the number of training iterations to 100. In the case of the global model (scFed), 100 communication rounds were performed, with each round iterating over the entire dataset once. The centralized and local models iterated over the training dataset 100 times during training. We performed five independent runs employing the Zhengsorted dataset for the global, centralized and local models, and recorded the training time. We report the average of these five training times in Table 2 . We observed that the global training time for the SVM models, with the number of clients varying over 2, 5, 10 and 20, is approximately two to three times longer than the centralized training time. The local training time was slightly longer than the centralized one. In the case of ACTINN, as shown in Table 2 , the global training time in the scFed framework exhibits a 3-fold increase relative to the centralized model, except for the scenario with 20 clients. The time difference between the centralized and local models for ACTINN was negligible. This analysis underscores the computational performance of scFed in cell type classification tasks, showing that its computational cost remains within an acceptable range, especially considering the scale and complexity of the task.
In our detailed exploration of runtime complexities, five independent measurements were taken on the server, focusing on parameter aggregation time and communication time for sending and receiving model parameters. As shown in Table 3 , both aggregation and communication times grow gradually with the increase in client numbers. This indicates a linear scaling in terms of both the time taken to integrate model parameters from additional clients and the communication overhead. However, potential complexities introduced at the largest client count lead to a deviation from this linear trend. Notably, given ACTINN's larger parameter set, it inherently requires more time than SVM, highlighting the time complexity associated with handling more advanced models. Benchmarking of federated Geneformer for cell type identification In this section, we benchmarked the federated implementation of Geneformer for cell type identification, contrasting its performance with SVM, ACTINN and XGBoost in the framework of scFed. The evaluation, as reported in Table 4 , includes centralized, local and global configurations using the Zhengsorted dataset. In the centralized setting, Geneformer achieved the best average F1 score of 0.905. However, in both the local and global contexts, ACTINN performed best, with F1 scores of 0.879 and 0.902, respectively. Notably, Geneformer's training duration considerably exceeded that of its counterparts. Even in the centralized scenario, its training time was still two orders of magnitude greater than that of the other evaluated models. Among the classification models on the Zhengsorted dataset, XGBoost showed the fastest runtime in the local and centralized setups, possibly owing to its boosting algorithm, optimization strategy and efficient implementation.
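As an illustration of how such measurements can be collected, the sketch below times one server-side aggregation step and uses the serialized size of the clients' parameters as a rough proxy for communication volume; the aggregation function and the parameter payloads are placeholders, not scFed internals.

```python
import pickle
import time

def timed_aggregation(aggregate_fn, client_params):
    """Time one server-side aggregation and estimate the payload it received."""
    start = time.perf_counter()
    global_params = aggregate_fn(client_params)
    aggregation_time = time.perf_counter() - start
    # Serialized size of all incoming parameter sets, a rough stand-in for bytes on the wire.
    payload_bytes = sum(len(pickle.dumps(p)) for p in client_params)
    return global_params, aggregation_time, payload_bytes
```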
DISCUSSION In this study, we assessed the utility of scFed, a federated-learning-based framework, to perform cell type classification with scRNA-seq. A wide range of scenarios were simulated to assess the efficacy of scFed in handling diverse challenges associated with single-cell data analysis, including considerations for privacy preservation, robustness to dataset heterogeneity and scalability to handle massive datasets. Within the scFed framework, we incorporated federated adaptations of advanced classification algorithms for single-cell identification, including SVM, neural network, tree-based model and Transformer-based model. In our experiments, scFed exhibited robustness in cell type classification, matching traditional centralized models while ensuring data privacy. The global models outperformed local ones through aggregating information across multiple clients. Though Transformers are widely applied in various tasks, it is essential to recognize that the Geneformer model demands a significant training time and encompasses a large number of parameters. This raises concerns about the necessity of utilizing such a heavyweight model for this task. The trade-off between accuracy, model size and training time requires careful consideration. Exploring alternative models or methods with simpler architectures and shorter training times while retaining commendable accuracy could offer a more efficient solution for cell type classification. Despite these promising findings, scFed does have some limitations. First, the increase in training time with the rise in client numbers could potentially limit its scalability for extremely large-scale applications. Moreover, while scFed showed robustness across different classification algorithms, the effectiveness of federated learning could be further explored with other algorithms, such as deep regression forests, or more advanced federated learning techniques. Importantly, the absence of raw data sharing does not eliminate privacy concerns. Shared model updates can inadvertently leak information, allowing potential adversaries to infer individual data attributes. Future directions should integrate scFed with privacy-preserving techniques such as trusted execution environments [ 30 ] or secure multi-party computation [ 31 ] to further strengthen scFed’s privacy guarantee. Regarding the wider application of scFed, while its effectiveness has been demonstrated in the context of scRNA-seq analysis, its federated learning-based framework offers immense potential for other tasks in bioinformatics. Given the prevalent challenges of data privacy and the need for multi-institutional collaboration in bioinformatics, scFed’s capabilities could be leveraged to enable secure, privacy-preserving analysis across a range of biological data types.
Shuang Wang and Bochen Shen contributed equally. Abstract The advent of single-cell RNA sequencing (scRNA-seq) has revolutionized our understanding of cellular heterogeneity and complexity in biological tissues. However, the nature of large, sparse scRNA-seq datasets and privacy regulations present challenges for efficient cell identification. Federated learning provides a solution, allowing efficient and private data use. Here, we introduce scFed, a unified federated learning framework that allows for benchmarking of four classification algorithms without violating data privacy, including single-cell-specific and general-purpose classifiers. We evaluated scFed using eight publicly available scRNA-seq datasets with diverse sizes, species and technologies, assessing its performance via intra-dataset and inter-dataset experimental setups. We find that scFed performs well on a variety of datasets, with accuracy competitive with that of centralized models. Though the Transformer-based model excels in centralized training, its performance slightly lags behind the single-cell-specific model within the scFed framework, and it carries a notable time-complexity concern. Our study not only helps select suitable cell identification methods but also highlights federated learning's potential for privacy-preserving, collaborative biomedical research.
ACKNOWLEDGMENTS This work was funded by the 'Pioneer' and 'Leading Goose' R&D Program of Zhejiang (No. 2022C01126, Dr. Sun and Prof. Wang) and the National Key R&D Program of China (2023YFF0905305, Dr. Sun; 2021YFC2500802 and 2021YFC2500806, Prof. Wang and Dr. Zheng), and supported by the National Natural Science Foundation of China (No. 32270690 and No. 32070671, Prof. Shen). AUTHOR CONTRIBUTIONS STATEMENT S.W. and Q.S. conceived the project. J.L. and M.S. provided critical feedback on the analysis of experiment results. L.G., BC.S. and Q.S. implemented the algorithms. Q.S. and BR.S. supervised the project and provided guidance. Q.S., BC.S. and S.W. wrote the manuscript in consultation with B.S. and J.L. DATA AVAILABILITY The scRNA-seq data were downloaded from https://doi.org/10.5281/zenodo.3357167 CODE AVAILABILITY The current version of scFed is implemented in Python and can be found at https://github.com/digi2002/federatedSinglecell . Author Biographies Shuang Wang is affiliated with the Institutes for Systems Genetics, West China Hospital, and is also the co-founder of Hangzhou Nuowei Information Technology Co., Ltd, Hangzhou, China. Bochen Shen is a research associate at Hangzhou Nuowei Information Technology Co., Ltd, located in Hangzhou, China. His expertise includes federated learning, deep learning and bioinformatics. Lanting Guo is an algorithm expert at Hangzhou Nuowei Information Technology Co., Ltd, located in Hangzhou, China. His expertise includes federated learning, deep learning and distributed computing systems. Mengqi Shang is an algorithm engineer at Hangzhou Nuowei Information Technology, Hangzhou, China. Her expertise includes federated learning. Jinze Liu is a professor at the Department of Biostatistics, Virginia Commonwealth University, USA. Her primary research focus lies in the field of bioinformatics, specifically concentrating on the comprehensive analysis of multi-omics data to unravel intricate biological insights and patterns. Qi Sun is an algorithm scientist at Hangzhou Nuowei Information Technology, Hangzhou, China. Her research interests are bioinformatics, NLP and federated learning. Bairong Shen is a professor and executive director at the Institutes for Systems Genetics, West China Hospital, Sichuan University. His research interests are biomedical informatics, genetics and medical systems biology. Appendix scRNA-seq data exhibit significant cell-to-cell variation due to biological heterogeneity and technical effects. In our inter-dataset experiments, we conducted experiments among different datasets that included both biological and technical variables. To evaluate solely the impact of biological factors within the federated learning framework, we introduced a dataset that exhibits significant biological variability. We conducted supplementary experiments using Lee's dataset (GEO GSE149689) [ 32 ]. Lee's study performed scRNA-seq using PBMCs to identify factors associated with the development of severe COVID-19 infection. From this dataset, we randomly selected two samples: PBMCs from a healthy donor and a patient with COVID-19. Our comparative experiment involved two distinct cell compositions: the first, termed the patient-agnostic experiment, mixed cells from both samples and distributed them across two clients; the second, referred to as the patient-specific experiment, assigned cells from each individual sample to separate clients.
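A minimal sketch of these two client partitions is given below; the donor labels, random-number handling and function name are illustrative assumptions rather than the exact setup used in the appendix experiment.

```python
import numpy as np

def build_clients(X, y, donors, patient_specific, seed=0):
    """X: (n_cells, n_genes); y: cell type labels; donors: per-cell donor IDs."""
    if patient_specific:
        # Patient-specific: one client per donor (e.g. healthy sample vs. COVID-19 sample).
        return [(X[donors == d], y[donors == d]) for d in np.unique(donors)]
    # Patient-agnostic: shuffle cells from both donors together, then split into two clients.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    return [(X[part], y[part]) for part in np.array_split(idx, 2)]
```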
As shown in Table A1 , SVM’s performance is relatively stable across all scenarios, with a slight decrease in the global setting compared with the centralized one in patient-specific experiments. Contrastingly, ACTINN, XGBoost and Geneformer show greater declines in the global scenario when applied to patient-specific data. In the patient-agnostic experiments, all the models show smaller reductions in global models compared with their patient-specific counterparts, with Geneformer in particular demonstrating improved global performance compared with its patient-specific results. Our findings reveal that mixing cells from both patient samples improves global model performance, underscoring the benefit of sample diversity in our federated learning framework.
CC BY
no
2024-01-16 23:43:49
Brief Bioinform. 2024 Jan 13; 25(1):bbad507
oa_package/e0/24/PMC10788680.tar.gz
PMC10788688
38226088
Introduction Hepatic hemangiomas are the most common benign tumor of the liver, with an incidence of up to 20% based on autopsy studies [ 1 ]. Most hepatic hemangiomas are detected incidentally, present no signs or symptoms, and require no treatment [ 2 , 3 ]. Treatment is indicated when a hepatic hemangioma causes symptoms, compresses adjacent organs (gastric outlet obstruction, Budd-Chiari syndrome), ruptures into the intraperitoneal space, or causes Kasabach-Merritt syndrome [ 3 , 4 ]. The size of the lesion and the growth speed of hepatic hemangiomas are considered important factors in determining the indication for surgery [ 4 ]. Some authors have argued that a hepatic hemangioma >10 cm in diameter should be called a giant hemangioma [ 5 - 7 ], while others have used a diameter of 10 cm as an indication for surgery [ 6 - 10 ]. Therefore, 10 cm can be used as a cut-off value for the diameter of a hepatic hemangioma. Several studies have evaluated the natural history of hepatic hemangiomas [ 11 - 16 ]. However, none of these studies has focused on hepatic hemangiomas measuring >10 cm in diameter. Knowing the natural history of hepatic hemangiomas >10 cm in diameter would be helpful in determining treatment. Hepatic hemangiomatosis is a rare condition in which the liver parenchyma is replaced with hemangiomatous lesions [ 17 ]. However, hepatic hemangiomatosis has been reported to be commonly observed in patients with a large hemangioma (>8 cm) [ 17 ]. The presence and extent of hepatic hemangiomatosis affect the surgical technique of resection of a large hepatic hemangioma and are clinically important [ 17 ]. Thus, the natural history of hepatic hemangiomatosis is also important in determining a treatment for hepatic hemangioma. However, the natural history of hepatic hemangiomatosis remains largely unknown [ 18 , 19 ]. This study aimed to evaluate the natural history of hepatic hemangiomas >10 cm in size. In addition, this study aimed to evaluate the natural history of hepatic hemangiomatosis in patients with hepatic hemangiomas >10 cm.
Materials and methods Patients Computed tomography (CT) and magnetic resonance imaging (MRI) reports at Kyoto University Hospital, Kyoto, Japan, between January 2001 and March 2023 were electronically searched for cases of hemangiomas. Adult patients (≥18 years of age) with hepatic hemangiomas with a maximum diameter of >10 cm on axial imaging were included. For each patient, the baseline study was defined as the oldest study (CT or MRI) of the upper abdomen, and the final study was defined as the latest study (CT or MRI) of the upper abdomen during follow-up in patients who did not undergo treatment, such as surgery and embolization, for hepatic hemangiomas. When a hemangioma was initially small but increased to more than 10 cm in diameter during follow-up, the first imaging study indicating the hemangioma with a maximum diameter of >10 cm was defined as the baseline study. The follow-up period was defined as the period between the baseline and the final study. Patients with a follow-up period longer than six months were included in this study. Patients with a history of hepatic hemangioma treatment were also excluded. This retrospective study was approved by the Ethics Committee of Kyoto University Graduate School and Faculty of Medicine (approval no. R3936). Diagnosis of hepatic hemangiomas and hepatic hemangiomatosis The diagnosis of hepatic hemangiomas was confirmed using dynamic contrast-enhanced CT or dynamic contrast-enhanced MRI of the abdomen, which demonstrated peripheral nodular enhancement on the arterial phase, followed by centripetal filling of the lesion on the delayed phase. The presence of hepatic hemangiomatosis was evaluated using dynamic contrast-enhanced CT or fat-saturated T2-weighted MRI of the abdomen. Hemangiomatosis was defined as the presence of diffuse geographic or innumerable confluent small nodular enhancements with poorly defined margins on arterial-phase CT. The enhancement had to become more homogeneous or show filling in on delayed-phase CT. On fat-saturated T2-weighted MR, hemangiomatosis was defined as the presence of diffuse geographic or confluent innumerable high-intensity small nodular signals with poorly defined margins. Hemangiomatosis was also classified as diffuse or localized. When hemangiomatosis spread throughout the liver, it was classified as diffuse. When hemangiomatosis spared some areas of the liver, it was classified as localized. Image evaluation A board-certified radiologist (Y.O.) with 12 years of experience in liver imaging confirmed the diagnosis of hemangioma based on the imaging findings and evaluated the other imaging findings. The location of the hepatic hemangioma was classified as right or left. When the hemangioma was located mainly to the right of the round ligament of the liver, it was classified as right, and when it was located mainly to the left of the round ligament, it was classified as left. In addition, in the baseline study, the maximum diameter of the hemangioma was measured on axial images. In the final study, the maximum diameter of the same lesion was measured on the axial images. The maximum diameter of the hemangioma in the final study was compared with that in the baseline study, and the change in hemangioma size was classified into three groups: enlargement, >120%; no change, 80-120%; and shrinkage, <80%. The growth rate of the hepatic hemangioma was defined as ((maximum diameter on final study)-(maximum diameter on baseline study))/(follow-up period). Subsequently, the median growth rate was calculated. 
The presence of hepatic hemangiomatosis was evaluated in all patients in the oldest studies (MRI or dynamic CT) of the upper abdomen. When hemangiomatosis was observed, the oldest studies (MRI or dynamic CT) and the latest studies (MRI or dynamic CT) of the upper abdomen were compared during the follow-up period. Changes in hemangiomatosis were classified as enlargement, no change, or shrinkage. When hemangiomatosis occupied more than 1.5 times the area in the latest study compared with the oldest study, it was defined as enlargement of hemangiomatosis. Hemangiomatosis that occupied less than half of the area in the latest study compared with the oldest study was defined as shrinkage of hemangiomatosis. If the change was not classified as enlargement or shrinkage, it was classified as no change (a brief code sketch of these classification rules is given at the end of the Materials and methods section). In patients without hepatic hemangiomatosis, the number of hepatic hemangiomas was counted and classified as single or multiple. The clinical course of each patient was checked by reviewing electronic medical records. The symptoms and signs of hepatic hemangiomas were assessed. Treatment of hepatic hemangiomas was also recorded. CT and MRI techniques CT images were obtained using a 4- to 320-detector row CT scanner (Aquilion, Aquilion One, Aquilion Prime; Canon Medical Systems Corporation, Japan). Our standard dynamic abdominal CT protocol included unenhanced, arterial, and delayed-phase images. The contrast agent was administered at a dose of 600 mgI/kg. A bolus-tracking system was used to obtain images at an appropriate time. The region-of-interest cursor for bolus tracking was placed over the aorta at the level of the celiac axis, and the trigger threshold was set at 200 HU. Arterial phase images were obtained 23 seconds after the trigger, and delayed-phase images were obtained 80 seconds after the trigger. Images with a slice thickness of 5 mm or 7 mm were used for image evaluation. MRI examinations were performed using a 1.5- or 3-T system (Magnetom Avanto, Magnetom Prisma Fit, Magnetom Skyra, Magnetom Sola, and Magnetom Trio Tim, Siemens Healthcare; Genesis Signa, General Electric Medical Systems). Breath-hold fat-saturated T2-weighted fast spin-echo or turbo spin-echo sequences were obtained. Multiphase contrast-enhanced breath-hold T1-weighted gradient-echo sequences were obtained before and after contrast medium injection. Gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid (Gd-EOB-DTPA) (Primovist, Bayer Schering Pharma, Berlin, Germany) and gadolinium-based extracellular contrast agents (Omniscan, General Electric Healthcare, Illinois, USA) were used. Gd-EOB-DTPA contained 0.25 mmol gadolinium (Gd)/mL, and 0.025 mmol Gd/kg body weight was administered. The Gd-based extracellular contrast agent contained 0.5 mmol Gd/mL, and 0.1 mmol Gd/kg body weight was administered. All contrast agents were injected at a rate of 1 mL/s, followed by a 10 mL saline flush. A bolus-tracking technique was used to obtain images at an appropriate time. The arterial and delayed phases were obtained at 0 and 80 seconds after the detection of contrast in the abdominal aorta.
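To make the size-change, growth-rate and hemangiomatosis-change definitions above concrete, a brief sketch follows. Diameters are assumed to be in millimetres, areas in any consistent unit and the follow-up period in years; the function names are illustrative rather than part of the study's actual analysis.

```python
def classify_hemangioma_change(baseline_mm, final_mm):
    """Size change: enlargement >120%, no change 80-120%, shrinkage <80% of baseline."""
    ratio = final_mm / baseline_mm * 100
    if ratio > 120:
        return "enlargement"
    if ratio >= 80:
        return "no change"
    return "shrinkage"

def growth_rate(baseline_mm, final_mm, follow_up_years):
    """Growth rate in mm/year, as defined above."""
    return (final_mm - baseline_mm) / follow_up_years

def classify_hemangiomatosis_change(oldest_area, latest_area):
    """Extent change: enlargement >1.5x the area, shrinkage <0.5x, otherwise no change."""
    ratio = latest_area / oldest_area
    if ratio > 1.5:
        return "enlargement"
    if ratio < 0.5:
        return "shrinkage"
    return "no change"

# Example: a 114 mm lesion measuring 130 mm after 8 years
# -> 130/114 is about 114% ("no change"); growth rate = (130 - 114)/8 = 2.0 mm/year
```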
Results Patients A total of 37 patients were identified. Of these, seven patients were excluded because they underwent treatment (surgery, n = 6; transarterial embolization, n = 1), and six patients were excluded because the follow-up period was less than six months. Furthermore, two more patients were excluded because of a previous history of treatment for hepatic hemangiomas. Thus, 22 patients (17 women and five men) comprised the study population, with a median age of 51 years at the baseline study. In all patients, dynamic contrast-enhanced CT or dynamic contrast-enhanced MRI of the liver was performed, and a diagnosis of hepatic hemangiomas was confirmed. The reasons for hepatic hemangioma detection were incidental imaging findings (n = 8), medical checkups (n = 6), evaluation or follow-up of a malignant neoplasm (n = 3), and not available (n = 5). None of the patients had hepatitis B or C virus infections. At the time of the baseline study, 20 patients were asymptomatic, while two patients complained of abdominal distention. At the time of the baseline study, the Child-Pugh score was five in 18 patients, six in two patients because of hypoalbuminemia, and seven in one patient because of moderate ascites. In one patient, no blood tests were performed, and the Child-Pugh score could not be calculated. Imaging findings The imaging findings and clinical courses of all patients are shown in Table 1 . Hepatic hemangiomas were detected on the right and left sides of the liver in 16 and six patients, respectively. The median maximum diameter of the hepatic hemangiomas at the baseline study was 114 mm (interquartile range (IQR): 103 mm to 170 mm). The median follow-up period was 95.5 months (IQR: 50 to 150 months). Enlargement, no change, and shrinkage of the hepatic hemangiomas were observed in six, 11, and five patients, respectively (Figures 1 - 3 ). The median growth rate of hepatic hemangiomas was 2.5 mm/year (IQR: −2.2 mm/year to 4.7 mm/year). Hemangiomatosis was observed in 15 patients in the oldest studies: the localized form in 13 patients and the diffuse form in two patients. During the follow-up period, two or more dynamic contrast-enhanced CT or MRI studies of the abdomen were performed in 14 patients with hepatic hemangiomatosis, and changes in hepatic hemangiomatosis were evaluated. The mean interval between the oldest and latest dynamic CT or MRI was 94.5 months. Enlargement, no change, and shrinkage of hepatic hemangiomatosis were observed in seven, six, and one patient, respectively. The relationship between the change in hepatic hemangioma size and hepatic hemangiomatosis in the 14 patients is shown in Table 2 . This table suggests a tendency for hepatic hemangiomatosis to enlarge in patients whose hepatic hemangioma also enlarges. In all patients without hepatic hemangiomatosis (n = 7), the hepatic hemangiomas were multiple. We evaluated the effect of age and sex on the growth rate of hepatic hemangiomas. Since the median age of the enrolled patients was 51 years, the growth rate of hepatic hemangiomas was evaluated in women aged ≤50 years and in those aged >50 years. This evaluation was not conducted in male patients, as only five male participants were enrolled. Notably, for women who reached the age of 51 years during the follow-up period, growth rates were calculated separately for the periods at age ≤50 years and at age >50 years, and both values were used for evaluation.
The median growth rate of hepatic hemangiomas in female patients aged ≤50 years was 4.0 mm/year, and that in those aged >50 years was −2.2 mm/year. Clinical course Hepatic hemangiomas were treated in two patients, and both underwent right lobectomy. Although the two patients were asymptomatic, surgery was performed owing to the enlargement of the hepatic hemangiomas and the patients' desire for surgical removal. In the remaining 20 patients, hepatic hemangiomas were followed without treatment. Two patients died during follow-up. One patient died from prostate cancer progression. The cause of death could not be identified in the other patient; this patient complained of abdominal distention and difficulty moving during follow-up. One of the two patients with abdominal distention at the baseline study developed a bleeding tendency during follow-up and was diagnosed with Kasabach-Merritt syndrome. The other patient showed no changes in abdominal distention. Two more patients developed symptoms during follow-up: abdominal distention in one patient, and abdominal distention and difficulty moving in the other. In two patients, follow-up was discontinued because the hemangioma shrank after a long follow-up period. One patient was lost to follow-up. Hepatic hemangioma rupture was not observed in any patient.
Discussion The median growth rate of hepatic hemangiomas was 2.5 mm/year. This result demonstrates a slow growth rate of hepatic hemangiomas and is comparable to the results of a previous study that showed a growth rate of 4.7 mm/year for hepatic hemangiomas with diameters >10 cm [ 13 ]. Spontaneous regression of hepatic hemangiomas is well known, and previous studies have reported various rates of decrease in the diameter of hepatic hemangiomas, ranging from 8.6% to 45.4% [ 12 , 13 , 15 , 16 , 20 ]. The average diameter of the hepatic hemangiomas evaluated in previous studies was less than 5 cm. This study demonstrated that hepatic hemangiomas measuring >10 cm also shrink in a substantial proportion of patients (22.7%). The median growth rate of hepatic hemangiomas in women aged ≤50 years was 4.0 mm/year, and that in those aged >50 years was −2.2 mm/year. This result is in line with the results of previous studies: hepatic hemangiomas in younger female patients (<40 or 45 years) tend to enlarge, and those in older female patients (>40 or 45 years) tend to shrink [ 15 , 16 ]. It is advisable to determine the treatment strategy with the understanding that hepatic hemangiomas >10 cm grow at a slow rate and occasionally shrink. Hemangiomatosis was observed in 15 patients (68.2%). Our results were consistent with those of a previous study that evaluated the imaging findings of patients with hepatic hemangiomas >8 cm and found that 44% of the patients had hepatic hemangiomatosis [ 17 ]. Changes in the size of hepatic hemangiomatosis were evaluated in 14 patients. Enlargement of hepatic hemangiomatosis was observed in seven patients, and enlargement of hepatic hemangiomas was observed in five of these seven patients. The presence and extent of hepatic hemangiomatosis are important because they influence the optimal liver resection technique and the functional residual liver volume [ 17 ]. Thus, clinicians should pay attention to temporal changes not only in hepatic hemangiomas but also in hemangiomatosis, especially in patients with enlarging hepatic hemangiomas. During follow-up, four patients experienced the appearance or aggravation of symptoms. Two patients died: one patient died from prostate cancer, and in the other patient, the cause of death was not clear. Twenty patients were followed without treatment, while two patients underwent liver resection. Kasabach-Merritt syndrome developed in one patient. Considering the relatively long follow-up period (median: 95.5 months), this study demonstrates that hepatic hemangiomas with a diameter >10 cm are clinically stable over long periods, and deaths associated with hepatic hemangiomas are infrequent. This study has some limitations. First, the study was conducted at a single hospital, and the study population was small. Second, this was a retrospective study, and information on the patients' symptoms may be incomplete. Third, the evaluation of temporal changes in hemangiomatosis was subjective.
Conclusions Hepatic hemangiomas >10 cm show slow growth rates and occasionally shrink during follow-up. Hepatic hemangiomatosis is commonly observed in patients with hepatic hemangiomas, and hemangiomatosis, as well as hemangioma, changes during the follow-up period. Although some patients with hepatic hemangiomas experience new symptoms or symptom aggravation, deaths associated with hepatic hemangiomas are rare.
Introduction: The natural history of a large hepatic hemangioma is important in determining the treatment strategy. Although several studies have assessed the natural history of hepatic hemangiomas, no study has focused on hepatic hemangiomas measuring >10 cm. The aim of this study was to assess the natural history of hepatic hemangiomas measuring >10 cm by evaluating imaging findings and clinical course. Methods: Computed tomography (CT) and magnetic resonance imaging (MRI) reports at Kyoto University Hospital, Kyoto, Japan, between January 2001 and March 2023 were retrospectively searched to find adult patients with hepatic hemangiomas >10 cm. Patients who were followed up without treatment for over six months were included. The maximum diameter of the hepatic hemangioma was compared between the baseline and the final CT or MRI. The clinical course of the patients was evaluated. Results: Twenty-two patients (17 women, five men; median age, 51 years) were identified. The median diameter of the hepatic hemangiomas in the baseline study was 114 mm. Two patients had abdominal distention at the time of the baseline imaging, whereas the others were asymptomatic. After follow-up without treatment (median, 95.5 months), enlargement, no change, and shrinkage of the hepatic hemangioma were observed in six, 11, and five patients, respectively. The median growth rate of hepatic hemangiomas was 2.5 mm/year. Two patients underwent liver resection for hepatic hemangioma, while the others were followed up without treatment. In four patients, symptoms appeared or worsened. Two patients died: one patient died from prostate cancer progression; the cause of death for the other was not confirmed. Conclusion: Hepatic hemangiomas show a slow growth rate during follow-up, and shrinkage is occasionally observed. Some patients experience new symptoms or aggravation of symptoms; however, deaths associated with hepatic hemangiomas are uncommon.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50563
oa_package/56/49/PMC10788688.tar.gz
PMC10788689
38226109
Introduction and background Palliative care, a discipline rooted in compassionate and holistic patient-centered care, has become increasingly recognized as essential in managing complex and life-limiting illnesses. Within medical specialties, respiratory medicine stands out as a field where the integration of palliative care is paramount [ 1 ]. Palliative care is a specialized approach to healthcare that focuses on improving the quality of life for patients and their families facing life-threatening illnesses. It encompasses a holistic model of care that addresses not only the physical symptoms but also attends to individuals' psychosocial, spiritual, and emotional needs. In respiratory medicine, palliative care goes beyond traditional disease-focused treatments to enhance the overall well-being of patients grappling with chronic and often progressive pulmonary conditions [ 2 ]. Palliative care seeks to alleviate suffering by managing symptoms, fostering effective communication, and supporting patients in making informed decisions about their care. It is not synonymous with end-of-life care; instead, it is a dynamic and integrated approach that can be implemented at any stage of a respiratory illness, from the time of diagnosis throughout the trajectory of the disease [ 3 ]. The significance of integrating palliative care into the fabric of respiratory medicine lies in its ability to enhance the quality of life for patients facing the challenges of diseases, such as chronic obstructive pulmonary disease (COPD), idiopathic pulmonary fibrosis (IPF), and lung cancer. Respiratory diseases often bring with them a burden of debilitating symptoms, breathlessness, and a decline in overall functioning, making the role of palliative care pivotal in addressing these issues [ 4 ]. Furthermore, palliative care in respiratory medicine is essential in navigating the complex decision-making processes associated with advanced directives, goals of care, and end-of-life discussions. The incorporation of palliative care principles ensures that patients receive care that aligns with their values and preferences, fostering a sense of control and dignity amid the challenges posed by respiratory illnesses [ 5 ]. This comprehensive review aims to provide an in-depth exploration of palliative care within the context of respiratory medicine. By examining the principles, challenges, and advancements in this field, we aim to contribute to understanding how palliative care can be optimally integrated into the care continuum for individuals facing respiratory diseases. This review aspires to offer a valuable resource for healthcare professionals, researchers, and policymakers invested in improving the holistic care of patients with respiratory conditions through a critical analysis of existing literature, practical insights, and emerging trends. As we navigate the intricate landscape of compassion in respiratory medicine, our goal is to shed light on the pivotal role of palliative care in enhancing the lives of individuals confronting these challenging illnesses.
Conclusions Palliative care in respiratory medicine emerges as a crucial and evolving discipline that demands attention and proactive engagement. This comprehensive review has highlighted the significance of early integration, effective communication, and caregiver support in optimizing the quality of life for individuals facing conditions, such as COPD, IPF, and lung cancer. It underscores the need for a holistic approach beyond addressing physical symptoms to encompass caregivers' psychosocial and spiritual dimensions. As we navigate challenges, such as late referrals and communication barriers, there is a call to action for healthcare providers, policymakers, and researchers alike. Investment in education, advocacy for accessible services, and collaborative efforts across disciplines are essential for shaping the future of palliative care in respiratory medicine. By embracing innovation, addressing research gaps, and fostering a commitment to patient-centered care, we can collectively enhance the well-being of individuals and their families confronting the complexities of respiratory diseases.
Palliative care has emerged as a crucial aspect of comprehensive healthcare, particularly in respiratory medicine. This review navigates the intricate landscape of palliative care in the context of respiratory diseases, including chronic obstructive pulmonary disease (COPD), idiopathic pulmonary fibrosis (IPF), and lung cancer. The exploration begins with a comprehensive examination of palliative care's definition, significance, and purpose in respiratory medicine. It progresses to understanding common respiratory diseases, their impact on patients' quality of life, and the nuances of disease progression and prognosis. Delving into the principles of palliative care, the review highlights the importance of a patient- and family-centered approach, emphasizing the multidisciplinary collaboration required for holistic care. Symptom management takes center stage, with a detailed exploration of dyspnea, cough, and pain, covering pharmacological and non-pharmacological interventions. The psychosocial and spiritual dimensions are then unveiled, recognizing the psychological impact of respiratory diseases and the significance of addressing spiritual needs with cultural sensitivity. Communication in palliative care is explored through breaking bad news, advance care planning, and shared decision-making. The section acknowledges the complex considerations surrounding end-of-life care, including recognizing the end-of-life phase, establishing care goals, and withdrawing life-sustaining therapies. Recognizing the indispensable role of caregivers, the review underscores the importance of caregiver support. It delineates strategies for providing emotional and practical support alongside a crucial focus on self-care for caregivers who shoulder the responsibilities of providing palliative care. As the exploration concludes, the challenges in implementing palliative care in respiratory medicine are outlined, from late referrals to communication barriers. However, the review also envisions a future marked by innovation, with emerging approaches, such as telehealth and personalized medicine, offering promising avenues for improvement. Research gaps and areas for improvement are identified, emphasizing the need for a collaborative effort to enhance the quality of palliative care for individuals facing respiratory diseases. The review culminates in a call to action, urging early palliative care integration, investment in education and training, research initiatives, advocacy for accessible services, and collaboration across disciplines. By heeding this call, healthcare providers, researchers, and policymakers can collectively contribute to the evolution and enhancement of palliative care in the challenging landscape of respiratory medicine.
Review Understanding respiratory diseases Overview of Common Respiratory Diseases COPD: COPD is a prevalent respiratory condition characterized by persistent airflow limitation. Often caused by exposure to noxious particles or gases, such as those found in cigarette smoke, COPD encompasses conditions, including chronic bronchitis and emphysema. The chronic and progressive nature of COPD significantly impacts patients' respiratory function, leading to symptoms, such as breathlessness, chronic cough, and increased susceptibility to respiratory infections [ 6 ]. IPF: IPF is a chronic and irreversible interstitial lung disease characterized by the progressive scarring of the lung tissue. The exact cause of IPF remains unknown, and its diagnosis often comes after excluding other potential causes of pulmonary fibrosis. As the disease advances, patients experience worsening dyspnea, a decline in lung function, and impaired oxygen exchange. IPF poses significant challenges due to its unpredictable progression and limited treatment options [ 7 ]. Lung cancer: Lung cancer, a leading cause of cancer-related mortality worldwide, encompasses a range of malignancies originating in the lung tissue. It is often associated with a history of tobacco use, although non-smokers can also be affected. Lung cancer's impact extends beyond the lungs, affecting systemic health and quality of life. Symptoms may include persistent cough, hemoptysis, and respiratory distress. The prognosis varies based on the type and stage of lung cancer at the time of diagnosis [ 8 ]. Impact on Patients' Quality of Life The burden of respiratory diseases on patients' quality of life is profound and multifaceted. Chronic symptoms, such as breathlessness, coughing, and fatigue, can significantly limit physical activities and daily functioning. The psychological impact is equally significant, with patients often experiencing anxiety and depression due to the chronic nature and uncertainty associated with respiratory illnesses. The progressive nature of diseases, such as COPD and IPF, adds a layer of complexity, impacting not only the patients but also their families and caregivers [ 9 ]. Disease Progression and Prognosis Understanding the trajectory of respiratory diseases is crucial for healthcare providers and patients alike. Disease progression varies among individuals and is influenced by factors, such as the underlying cause, comorbidities, and response to treatment. Prognosis assessment involves considering the stage of the disease, functional impairment, and the presence of complications. Providing patients and their families with accurate and realistic information about the expected course of the disease is essential for fostering informed decision-making and facilitating timely discussions about palliative care options. In the subsequent sections, we will delve into how palliative care can play a pivotal role in addressing the unique challenges posed by the progression of respiratory diseases [ 10 ]. Palliative care principles Integration of Palliative Care in Respiratory Medicine Integrating palliative care into respiratory medicine is crucial for optimizing patient outcomes and experiences. Palliative care principles should be seamlessly woven into the fabric of care from the point of diagnosis throughout the trajectory of the respiratory disease. 
Early integration ensures that patients and their families receive the necessary support to cope with the physical and emotional challenges associated with conditions, such as COPD, IPF, and lung cancer. This integration is not about replacing curative treatments but enhancing the overall care experience, promoting shared decision-making, and aligning interventions with patients' values and preferences [ 11 ]. Multidisciplinary Approach Palliative care in respiratory medicine necessitates a multidisciplinary approach that brings together healthcare professionals with diverse expertise to address the complex needs of patients. This may include pulmonologists, nurses, respiratory therapists, social workers, psychologists, and spiritual care providers. The collaboration of these specialists ensures a comprehensive assessment of the patient's physical and psychosocial needs and facilitates a coordinated effort to implement tailored interventions. The multidisciplinary approach is essential for delivering holistic care that acknowledges the diverse challenges of individuals with respiratory diseases [ 12 ]. Patient and Family-Centered Care At the heart of palliative care in respiratory medicine is the principle of patient and family-centered care. Recognizing that illness affects not only the individual but also their loved ones, palliative care actively involves patients and their families in decision-making processes. Open and honest communication, shared decision-making, and sensitivity to cultural and individual values are integral components of patient and family-centered care. This approach empowers patients to participate actively in their care, fostering a sense of control and dignity amid the complexities of respiratory diseases [ 13 ]. Symptom management in respiratory patients Dyspnea Dyspnea, characterized by difficulty breathing, is a hallmark symptom in various respiratory diseases, emphasizing the significance of understanding its underlying causes for targeted management. The causes of dyspnea can span bronchoconstriction, lung parenchymal changes, or heightened respiratory effort. Assessment of dyspnea involves a thorough examination, encompassing the patient's medical history, a physical examination, and, when necessary, imaging studies and pulmonary function tests. Pharmacological interventions are vital in managing dyspnea, with options, including bronchodilators (beta-agonists and anticholinergics), corticosteroids, and oxygen therapy [ 14 ]. Bronchodilators work to alleviate bronchoconstriction and enhance airflow, while corticosteroids, particularly in conditions, such as COPD, address inflammation. Oxygen therapy, when indicated, aims to improve oxygen saturation and alleviate dyspnea. Non-pharmacological interventions are equally pivotal, incorporating pulmonary rehabilitation, breathing exercises, and relaxation techniques to enhance respiratory muscle function and overall capacity. Supportive measures, such as positioning, fan therapy, and activity pacing, further contribute to relieving dyspnea, presenting a comprehensive approach to its management [ 14 ]. Cough Cough, a prevalent symptom in various respiratory diseases, can be attributed to diverse factors, such as airway inflammation, irritants, or tumors. Assessing the root cause of a cough is pivotal and typically involves a comprehensive examination encompassing a detailed clinical history, a thorough physical examination, and, when deemed necessary, the utilization of imaging studies. 
Identifying the underlying factors contributing to cough is crucial for effective management in respiratory medicine [ 15 ]. Regarding treatment options, the approach aims to address the specific cause of the cough. In respiratory medicine, interventions may include bronchodilators to alleviate airway constriction, mucolytic agents to facilitate the thinning of mucus for more straightforward clearance, or antitussive medications when the cough becomes particularly distressing. Tailoring treatment to the individual's specific condition underscores the importance of a personalized and targeted approach in managing cough as a symptom of respiratory diseases [ 15 ]. Pain Management Pain in respiratory patients can emanate from diverse sources, encompassing chest wall pain, pleuritic pain, or discomfort associated with various procedures and interventions. Identifying the specific source of pain is paramount for tailoring effective and targeted pain management strategies. Chest wall pain may be related to musculoskeletal issues or inflammation, while pleuritic pain often results from inflammation of the pleura. Pain associated with medical procedures and interventions, such as surgery or diagnostic tests, further adds to the complexity of pain management in this population [ 16 ]. Analgesic strategies employed in respiratory medicine embrace a multifaceted approach, combining pharmacological and non-pharmacological interventions. Pharmacological options include the use of nonsteroidal anti-inflammatory drugs (NSAIDs), opioids, and adjuvant medications to address pain stemming from different origins. Non-pharmacological interventions are complementary, involving physical therapy to enhance musculoskeletal function, breathing exercises to improve respiratory function, and relaxation techniques to alleviate overall discomfort. This comprehensive approach to pain management recognizes the multifaceted nature of pain in respiratory patients, striving to enhance their overall well-being through personalized and effective interventions [ 16 ]. Psychosocial and spiritual dimensions Psychological Impact of Respiratory Diseases Anxiety and depression: Respiratory diseases often evoke anxiety and depression due to the challenges associated with chronic symptoms, reduced functional capacity, and uncertainty about the future. Anxiety may arise from the fear of breathlessness or exacerbations, while depression can result from the impact of the disease on daily life and social interactions. Screening tools and open communication with patients can help identify these psychological challenges [ 17 ]. Coping mechanisms: Coping with respiratory disease's psychological impact involves individual and support-based strategies. Patients benefit from education about their condition, support groups, and counseling services. Cognitive-behavioral therapy (CBT) and mindfulness techniques can be effective in helping patients develop coping mechanisms and resilience in the face of the emotional challenges posed by respiratory illnesses [ 18 ]. Addressing Spiritual Needs Importance of spiritual care: Spiritual care recognizes the importance of addressing the existential and spiritual dimensions of an individual's experience with illness. In respiratory medicine, acknowledging and respecting patients' spiritual beliefs and values is integral to providing holistic care. 
Spiritual care contributes to a sense of meaning, purpose, and connectedness, which can positively impact overall well-being, even in the face of chronic and progressive diseases [ 19 ]. Cultural sensitivity: Cultural sensitivity in addressing spiritual needs is paramount. Different cultures have diverse beliefs and practices about illness, death, and spirituality. Healthcare providers should engage in open and respectful conversations with patients to understand their cultural context. This may involve collaboration with spiritual or religious leaders when appropriate, ensuring that care aligns with the patient's cultural and spiritual preferences [ 20 ]. Communication in palliative care Breaking Bad News Breaking bad news is a delicate and crucial aspect of palliative care, especially in respiratory medicine, where conditions can be chronic and progressive. Communication should be honest, empathetic, and tailored to the patient's needs. This involves providing information clearly and understandably, gauging the patient's readiness to receive information, and addressing emotional responses. Supporting patients and their families through bad news discussions is vital for building trust and facilitating informed decision-making [ 21 ]. Advance Care Planning Advance care planning involves discussing and documenting an individual's preferences and values regarding their future healthcare, especially in the event of a potential decline in health or incapacity. This process empowers patients to make decisions about their care that align with their values. In respiratory medicine, advance care planning may include discussions about intubation, mechanical ventilation, and preferences for end-of-life care. These conversations should be ongoing, revisited regularly, and documented in the patient's medical record [ 22 ]. Shared Decision-Making Shared decision-making is a collaborative approach where healthcare providers and patients work together to make healthcare decisions. In respiratory medicine, shared decision-making is critical due to the complex nature of treatment options, potential side effects, and the impact on the patient's quality of life. This approach involves presenting information about treatment options, discussing potential benefits and risks, and considering the patient's values and preferences. Shared decision-making fosters a partnership between healthcare providers and patients, ensuring that care aligns with the individual's goals and priorities [ 23 ]. Addressing Cultural and Ethical Considerations Cultural and ethical considerations play a significant role in communication within the palliative care context. Recognizing and respecting diverse cultural beliefs and values related to illness, death, and decision-making is essential. Healthcare providers should approach conversations with cultural humility, acknowledging their biases and actively seeking to understand the cultural context of each patient. Ethical considerations involve ensuring autonomy, beneficence, and non-maleficence in decision-making processes. This includes navigating issues related to withdrawing life-sustaining treatments and ensuring that decisions align with the patient's values and ethical principles [ 24 ]. End-of-life care in respiratory medicine Recognizing the End-of-Life Phase Clinical indicators: Recognizing the end-of-life phase in respiratory medicine involves assessing clinical indicators that suggest a decline in health. 
This may include a progressive decrease in functional capacity, increased symptom burden, recurrent infections, and a trajectory of irreversible decline despite interventions. Regular assessments, including discussions with the patient and their family, can aid in identifying when the end-of-life phase is approaching [ 25 ]. Prognostication challenges: Prognostication in respiratory diseases can be challenging due to the unpredictable nature of conditions, such as COPD, IPF, and lung cancer. Healthcare providers should exercise caution in predicting precise timelines but can provide general information about the expected trajectory based on the patient's disease characteristics [ 26 ]. Goals of Care at the End of Life Establishing patient-centered goals: End-of-life care in respiratory medicine demands a patient-centered approach, recognizing the unique values and priorities of the individual facing a life-limiting illness. This involves collaborative discussions among the patient, their family, and the healthcare team to identify goals aligning with their wishes. By fostering open communication, healthcare providers can gain insights into patients' aspirations, concerns, and what matters most to them. Realistic and achievable goals are formulated, emphasizing aspects such as maximizing comfort, preserving dignity, and facilitating meaningful interactions with loved ones. This patient-centered goal-setting process ensures that the care provided is tailored to the individual, enhancing their overall quality of life during the challenging end-of-life phase [ 27 ]. Advance directives and documentation: Advance care planning is a crucial component of end-of-life care in respiratory medicine, involving the documentation of the patient's preferences and goals for their healthcare. Advance directives and living wills serve as essential documents that articulate the patient's choices regarding medical interventions, resuscitation preferences, and broader care goals. These documents provide a roadmap for healthcare decisions when patients cannot communicate their wishes. Ensuring that these documents are in place and up-to-date is imperative. Clear communication of the documented preferences to the healthcare team guarantees that medical decisions align with the patient's established values and priorities. This comprehensive approach to advance directives and documentation empowers individuals to have a say in their care, even in times of incapacity, and promotes a more informed and patient-centered decision-making process [ 22 ]. Withdrawal of Life-Sustaining Therapies Transparent communication: Discussions surrounding the withdrawal of life-sustaining therapies demand a communication approach characterized by transparency and empathy. Healthcare providers are tasked with openly addressing the complexities of this decision, ensuring that the patient and their family comprehend both the potential benefits and burdens associated with continuing or discontinuing interventions. Transparent communication facilitates informed decision-making, allowing the patient and their family to fully understand the implications of the choices ahead. This approach, grounded in openness and empathy, fosters trust between healthcare providers and patients or their surrogate decision-makers during a sensitive and challenging time [ 28 ]. Multidisciplinary collaboration: The decision to withdraw life-sustaining therapies is multifaceted and requires input from various healthcare professionals. 
A multidisciplinary team encompassing respiratory therapists, nurses, social workers, and palliative care specialists collaborates to ensure a comprehensive and well-informed decision-making process. Each team member brings a unique perspective and expertise, contributing to a holistic understanding of the patient's medical, emotional, and social context. This collaborative effort ensures that the decision aligns with the patient's wishes, is compassionate, and respects the values and goals established by the patient and their family [ 29 ]. Psychosocial support: Initiating the withdrawal of life-sustaining therapies introduces a significant emotional burden for both patients and their families. Recognizing the emotional challenges inherent in this transition, the provision of psychosocial support becomes paramount. This support encompasses a range of services, including counseling, spiritual care, and bereavement support. Counseling services offer a space for individuals to express their feelings, fears, and uncertainties, providing emotional support during this difficult time. Spiritual care attends to the existential and spiritual dimensions, offering solace and meaning. Bereavement support helps individuals cope with grief and loss, recognizing that the decision to withdraw from life-sustaining therapies marks a profound and impactful moment in the patient's and family's journey. The ready availability of these psychosocial support services ensures that individuals facing this transition receive the comprehensive and compassionate care needed to navigate the emotional complexities of such a decision [ 30 ]. Caregiver support Importance of Caregivers in Palliative Care Integral to the care team: Caregivers are not mere observers but integral palliative care team members, contributing essential support to patients navigating respiratory diseases. Their role extends beyond physical assistance, encompassing emotional, psychological, and often spiritual dimensions of care. Recognizing the caregiver as a valued care team member is foundational for delivering truly holistic, patient-centered care. Caregivers bring unique insights into the daily experiences and needs of the patient, acting as a bridge between the healthcare team and the individual receiving care. Integrating their perspectives and expertise ensures a more comprehensive understanding of the patient's condition and facilitates tailored care that addresses the multifaceted challenges of respiratory diseases [ 31 ]. Enhancing patient well-being: Caregivers significantly influence the overall well-being of patients facing respiratory diseases. Their presence gives the patient a sense of security, comfort, and emotional stability. Beyond the physical tasks of caregiving, they often become advocates, ensuring that the care provided aligns with the individual's values, preferences, and wishes. Caregivers are attuned to the emotional and psychological needs of the patient, offering companionship and a supportive presence during challenging times. By actively engaging with caregivers and recognizing their contributions, healthcare providers can foster a collaborative and compassionate care environment that prioritizes the patient's well-being. This acknowledgment validates the essential role of caregivers and strengthens the patient-caregiver-healthcare team partnership, ultimately contributing to a more positive and patient-centered care experience [ 32 ]. 
Providing Emotional and Practical Support Open communication: Establishing open and regular communication channels with caregivers is fundamental to providing comprehensive, patient-centered care. This involves ongoing discussions about the patient's condition and treatment plans and addressing caregivers' concerns or uncertainties. Transparent communication keeps caregivers informed about the patient's health status and fosters a sense of trust and collaboration between healthcare providers and caregivers. By maintaining open lines of communication, healthcare teams can ensure that caregivers are well-equipped to support the patient effectively, understand the rationale behind medical decisions, and actively participate in the care planning process [ 33 ]. Emotional support: Respiratory diseases pose emotional challenges for patients and their caregivers. Providing emotional support to caregivers is essential for addressing the psychological impact of caregiving. This support involves acknowledging the caregiver's feelings, offering empathetic listening, and validating the emotional toll that caring for someone with a respiratory illness can take. Healthcare providers play a crucial role in recognizing and addressing caregiver stress, anxiety, or grief. Access to counseling services or support groups can provide additional avenues for caregivers to express their emotions, share experiences, and receive guidance on coping strategies. By offering emotional support, healthcare providers contribute to the overall well-being of both the patient and caregiver [ 34 ]. Practical assistance: Caregivers often juggle various responsibilities, including complex caregiving tasks, such as medication management, assisting with activities of daily living, and coordinating medical appointments. Providing practical assistance is essential for helping caregivers navigate their roles effectively. This can include offering educational resources that provide information on disease management, organizing training sessions to enhance caregiving skills, and facilitating access to respite care services. Respite care allows caregivers to take breaks, recharge, and attend to their well-being. By addressing the practical aspects of caregiving, healthcare providers support caregivers in providing quality care to the patient while also attending to their own needs, ultimately contributing to a more sustainable and positive caregiving experience [ 35 ]. Self-Care for Caregivers Recognizing burnout and stress: Caregiving is a demanding role that can affect caregivers' physical and emotional well-being, leading to burnout and stress. Recognizing the signs of burnout, such as fatigue, changes in mood, or a sense of overwhelm, is crucial for caregivers. Healthcare providers are vital in encouraging caregivers to be attentive to their mental and physical health. Regular check-ins and open communication channels with healthcare providers create opportunities to discuss caregiver stressors, identify early signs of burnout, and collaboratively develop strategies to manage and mitigate stress [ 36 ]. Promoting self-care practices: Educating caregivers about self-care is essential for maintaining their well-being. This involves promoting healthy lifestyle habits, ensuring caregivers prioritize adequate rest, and encouraging engagement in leisure activities. Caregivers often neglect their needs while caring for a loved one, and healthcare providers can offer guidance on incorporating self-care practices into their routines. 
Providing resources for respite care, where caregivers can take a temporary break from their caregiving responsibilities, and facilitating access to support groups create a supportive environment for caregivers to share experiences and learn strategies for managing their challenges [ 37 ]. Offering professional support: Acknowledging that caregiving may require professional assistance is crucial to caregiver support. Healthcare providers can be pivotal in connecting caregivers with additional resources and services. This may include home healthcare services to provide additional assistance with caregiving tasks, counseling services to address emotional and mental health needs and support groups where caregivers can connect with others facing similar challenges. Professional support acknowledges the complexity of the caregiving role and ensures that caregivers have access to the necessary resources to prevent burnout and maintain their well-being [ 38 ]. Challenges and future directions Challenges in Implementing Palliative Care in Respiratory Medicine Late referrals to palliative care: Late referrals to palliative care for respiratory patients pose a significant challenge, as individuals may need more comprehensive support that could enhance their quality of life throughout their disease. Overcoming this challenge involves addressing the reluctance among healthcare providers to initiate palliative care discussions early in the trajectory of respiratory diseases. Increasing awareness about the benefits of early palliative care, along with providing education to healthcare providers, can help shift the paradigm. By emphasizing the advantages of integrating palliative care from the point of diagnosis, healthcare professionals can work toward ensuring that individuals and their families receive timely and holistic support tailored to their evolving needs [ 39 ]. Communication barriers: Effective communication about palliative care, especially discussions related to end-of-life care, can be challenging for healthcare providers. Resistance may arise from patients, families, or even fellow professionals due to the sensitive nature of these conversations. To address these communication barriers, ongoing training programs are essential. Improving communication skills, including empathetic listening and discussing palliative care in a culturally sensitive manner, can enhance healthcare providers' ability to navigate these discussions. In addition, increasing public awareness about the benefits of early palliative care involvement can help normalize these conversations, making them more acceptable for all parties involved [ 40 ]. Limited access to palliative care services: Disparities in access to palliative care services, especially in rural or underserved areas, present a challenge in ensuring equitable care for all respiratory patients. Overcoming these disparities requires a multifaceted approach. Addressing logistical and resource challenges involves strategic planning to allocate resources efficiently and leveraging technology to provide virtual access to palliative care services. Expanding palliative care training for healthcare providers in various settings, including rural communities, can enhance the availability of these services. 
Advocating for policies prioritizing and enhancing the accessibility of palliative care ensures that individuals facing respiratory diseases, regardless of geographic location or socioeconomic status, have access to the support and care they need throughout their healthcare journey [ 41 ]. Innovations and Emerging Approaches Telehealth and remote monitoring: Integrating telehealth and remote monitoring technologies represents a promising frontier in palliative care for respiratory patients. These innovations leverage digital tools to bridge geographical distances and enhance care delivery. Virtual consultations enable healthcare providers to connect with patients and their caregivers, offering timely assessments and interventions without the constraints of physical proximity. Remote monitoring capabilities allow for the continuous tracking of symptoms and vital signs, providing a real-time understanding of the patient's condition. This not only improves accessibility to care but also contributes to the continuity of support, enabling healthcare providers to respond promptly to changes in the patient's health status. By embracing telehealth and remote monitoring, palliative care for respiratory patients can become more patient-centered, adaptable, and responsive to the dynamic nature of their healthcare needs [ 42 ]. Personalized medicine in symptom management: Advancements in personalized medicine open new avenues for tailoring symptom management strategies in respiratory palliative care. By considering an individual's unique characteristics and genetic makeup, healthcare providers can design interventions that align with the patient's needs. Personalized approaches to symptom management not only optimize the effectiveness of interventions but also aim to minimize potential side effects. This level of customization enhances the overall patient experience by recognizing and addressing the variability in how individuals respond to treatments. As the field of personalized medicine evolves, it has the potential to revolutionize how respiratory disease symptoms are managed in palliative care, promoting a more targeted and patient-centric approach to symptom relief and overall care [ 43 ]. Research Gaps and Areas for Improvement Understanding patient and caregiver perspectives: There is a critical need for research that delves into the perspectives of both patients and caregivers regarding palliative care in respiratory medicine. By gaining insights into their unique needs, preferences, and experiences, researchers can inform the development of more patient-centered interventions. Understanding the challenges and priorities from the viewpoint of those directly affected by respiratory diseases and palliative care allows for the creation of tailored strategies that resonate with the individuals involved. This research validates the importance of patient and caregiver voices and ensures that palliative care interventions align with the values and preferences of those receiving and providing care [ 44 ]. Impact of palliative care on quality of life: Further research is essential to comprehensively assess the impact of palliative care on the quality of life for individuals grappling with respiratory diseases. This research involves evaluating the effectiveness of palliative care interventions, exploring the long-term outcomes for patients and their caregivers, and identifying the factors contributing to positive experiences. 
By rigorously examining the impact of palliative care, researchers can contribute valuable evidence that informs healthcare practices, policies, and interventions. Understanding the specific ways in which palliative care enhances the quality of life for respiratory patients is integral to fostering continuous improvement in the delivery of patient-centered care [ 45 ]. Education and training for healthcare providers: Research aimed at evaluating the effectiveness of educational and training programs for healthcare providers in palliative care is imperative. This research focus includes understanding the barriers that healthcare providers may encounter when implementing palliative care principles and identifying strategies to enhance their knowledge and skills. By assessing the impact of educational interventions, researchers can contribute to the ongoing refinement of training programs, ensuring that healthcare providers are well-equipped to integrate palliative care seamlessly into respiratory medicine. This research also plays a pivotal role in advocating for continuous education in palliative care, fostering a healthcare environment that prioritizes the delivery of holistic, patient-centered care [ 46 ].
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50613
oa_package/a7/c6/PMC10788689.tar.gz
PMC10788690
38226116
Introduction Vernal keratoconjunctivitis (VKC) is an acute or chronic inflammation of the cornea and conjunctiva (tarsal and bulbar) that occurs in children and adolescents with seasonal exacerbations [ 1 , 2 ]. It has a higher incidence in a hot and dry climate and is common in South Asia, Central Africa, and South America. It is an important public health problem in Asia, having a significant effect on the quality of life and productivity of individuals affected by it [ 3 ]. VKC starts as a type 1 (immediate) hypersensitivity reaction in which antigen causes cross-linking of two adjacent immunoglobulin E (IgE) molecules, initiating degranulation of mast cells and the release of histamine, which leads to the classical signs of an allergic reaction. Later, the disease becomes chronic due to type 2 helper T lymphocyte invasion with activation of fibroblasts in the cornea and conjunctiva, leading to corneal and tarsal conjunctival changes. VKC must be differentiated from seasonal and other allergic conjunctivitis, which are type 1 hypersensitivity (IgE) mediated and have only conjunctival involvement in the majority of cases [ 2 , 4 , 5 ]. A variety of inflammatory mediators are released by mast cells, such as histamine, leukotrienes, prostaglandins, chymase, tryptase, chondroitin sulfate, and heparin. These cause increased vascular permeability with the involvement of eosinophils, T and B lymphocytes, and fibroblasts that increase collagen deposition in the conjunctiva. IL-4 and IL-13 specifically cause the formation of conjunctival giant papillae by promoting extracellular matrix deposition by fibroblasts [ 4 - 8 ]. The clinical presentation of VKC is asymmetric in most cases and is classified based on the affected area, which can be palpebral, limbal, or involving both of these components. The conjunctiva shows an increase in vessel permeability with cellular recruitment and epithelial hyperplasia. The upper tarsus is commonly involved with the formation of giant papillae with a diameter of more than 0.3 mm, producing the cobblestone appearance (the hallmark of VKC). In severe VKC, papillary hypertrophy produces cauliflower-like excrescences. Furthermore, the advanced involvement of papillary conjunctiva can lead to symblepharon formation. Limbal signs include thickening and opacification of the conjunctiva and gelatinous papillae along with a perilimbal collection of degenerated epithelial cells and eosinophils known as Horner-Trantas dots. Limbal disease can lead to limbal stem cell deficiency and pannus formation with neovascularization of the cornea. The corneal features depend upon the severity of conjunctival inflammation. The toxic effects of inflammatory mediators may lead to punctate epithelial erosions (PEEs). These PEEs coalesce into macroerosions, and the accumulation of mucus and fibrin over these macroerosions leads to the formation of shield ulcers [ 9 ]. There are certain unmet needs regarding the management of VKC, such as the absence of diagnostic criteria, unclear pathogenesis, and ineffectiveness of the mainstay topical anti-allergic treatment (antihistamines) in cases with moderate to severe disease. This lack of standardized diagnostic and treatment protocols leads to variations in management regimes among different countries. Moreover, safety and complications of drugs and compliance with therapy are other issues. Thus, considering the immunomodulatory role of tacrolimus, it may be beneficial in the complete control of moderate to severe VKC. 
Ultimately, satisfactory and regular treatment based on individual patient requirements, together with long-term follow-up, is essential to establish the efficacy and safety of treatment with tacrolimus, which will help in setting new guidelines for treatment protocols in the future [ 3 , 10 - 12 ]. The treatment of VKC should be according to the duration and frequency of symptoms and the severity of signs. Mild cases are improved with topical mast cell stabilizers, antihistamines, combined antihistamine-mast cell stabilizers, non-steroidal anti-inflammatory drugs, and lubricants. However, severe cases frequently have remissions and relapses, require prolonged treatment, and result in visual compromise if not properly treated [ 2 ]. Corticosteroids are added in addition to anti-allergic drugs, and they have to be used in the long term to control symptoms [ 13 ]. However, steroid withdrawal increases disease severity, and long-term use results in cataract formation, glaucoma, and keratoconus [ 14 ]. Immunomodulators, such as cyclosporine, IFN-α2b, and tacrolimus, are alternatives to steroids in controlling moderate to severe symptoms. Cyclosporine 1% is given three to four times per day and is effective for seasonal exacerbations. Tacrolimus is almost 100 times more potent than cyclosporin and is isolated from the fermentation broth of Streptomyces tsukubaensis [ 15 , 16 ]. IFN-α2b is as effective as tacrolimus but has limited availability. Tacrolimus is a calcineurin inhibitor that binds FK506-binding proteins in T-lymphocytes. This leads to a decreased activity of T cell (type 1 and 2) cytokines, such as IL-2, interferon γ, IL-4, and IL-5. Tacrolimus also has a role in inhibiting histamine release from mast cells [ 17 ]. Conjunctival cytology of patients treated with tacrolimus revealed a reduction in inflammatory cells, most commonly eosinophils [ 18 ]. Tacrolimus was first used as an immune suppressant in liver and solid organ transplants. It has also been used for treating certain skin conditions, such as atopic dermatitis and vitiligo. Although no high-risk complications have been observed, side effects, such as ocular burning, itching, and increased sensitivity to heat and light, have been reported. Tacrolimus has been used effectively for the treatment of multiple conjunctival inflammatory conditions, such as VKC, atopic keratoconjunctivitis (AKC), giant papillary conjunctivitis, uveitis, Mooren's ulcer, corneal graft rejection, blepharokeratoconjunctivitis, and chronic follicular conjunctivitis [ 15 , 19 , 20 ]. Although studies regarding the efficacy of tacrolimus have been reported in the literature, very limited work has been done regarding its long-term outcomes to the best of our knowledge. In addition, work has been done on ocular preparations of tacrolimus, which are not available in developing countries, especially Pakistan, a country with a high burden of VKC. Thus, we used 0.03% tacrolimus skin ointment and conducted a 24-month study to assess its efficacy and safety in severe cases of VKC. The aim is to provide the best available evidence for clinical practice and to assist in setting guidelines for the management of VKC in developing countries [ 21 ].
Materials and methods A prospective nonrandomized interventional study was conducted at the Department of Cornea and Refractive Surgery, Al-Shifa Trust Eye Hospital, Pakistan, from July 2020 to July 2022. The ethical approval was taken from the Ethical Review Committee of Al-Shifa Trust Eye Hospital (reference no.: ERC-67/AST-20), and all the procedures were conducted in accordance with the protocols of the Declaration of Helsinki. Patients with newly diagnosed or recurrent moderate to severe VKC were consecutively recruited into the study. The exclusion criteria included all other forms of allergic conjunctivitis, giant papillary conjunctivitis, and a history of subconjunctival or systemic steroid use. Examination protocols All the participants were categorized at their baseline assessment by utilizing the 5-5-5 exacerbation scale [ 21 ]. This system classifies all the features of VKC into three different categories (100-point scale, 10-point scale, and one-point scale) based on the severity of the disease. Baseline and follow-up examinations were done by a single clinician (WA). Written informed consent was obtained from all participants before their inclusion in the study. A thorough medical and ocular history was obtained at the time of baseline assessment, which was followed by a complete ocular examination. An initial grading was done based on the classification system described above. Afterward, Eczemus (tacrolimus 0.03%) skin ointment (Brookes Pharma Ltd., Karachi, Pakistan) was prescribed at a frequency of two times a day depending upon the severity of the disease. Eczemus has been reported to be effective for ocular use in previous literature [ 2 , 22 - 25 ]. No adjunctive therapy was provided to the participants other than artificial lubricants, depending upon the ocular condition. The frequency of treatment was not altered in response to changes in the severity of the disease, and therapy was continued for 12 months. Follow-up visits were advised for all patients at regular intervals. However, data were collected at the follow-ups at one month and three months after the initiation of therapy and at every six-month interval thereafter. Lastly, the blood profile along with renal and liver function tests was assessed at every follow-up after three months of initiation of therapy. In addition, patients were thoroughly interviewed about any side effects or adverse effects they had experienced during this therapy. Outcome variables The primary outcome of the study was the severity of VKC, which was assessed by calculating the final score based on the clinical signs observed at every follow-up. In addition to the total grading, the score of every category was also recorded. The relapse rate after reduction in the frequency of therapy was also assessed; it was calculated as the ratio of cases that showed a progression in the severity of the disease, leading to a 183-point increase in the total score, to the total number of cases in which therapy was prescribed. Lastly, the frequencies of side effects and adverse effects experienced by the participants and their severity were also assessed as outcomes of the study. Statistical analysis Complete analysis of data was conducted in IBM SPSS Statistics for Windows, version 21 (released 2012; IBM Corp., Armonk, New York, United States). Rigorous data cleaning was done before the final analysis. In the descriptive analysis, frequencies were described for all categorical variables. 
The continuous variables were presented descriptively using the mean along with the standard deviation and range. Repeated-measures analysis of variance (ANOVA) was used to assess changes in 5-5-5 grading scores with therapy; it was conducted at a 95% confidence level. A P value of <0.05 was considered statistically significant.
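For illustration, the analysis described above maps onto standard statistical tooling. The sketch below is a hypothetical Python equivalent (the study itself used SPSS); the data file, column names, and visit labels are assumptions rather than the study's actual dataset.

```python
# Hypothetical sketch only: a repeated-measures ANOVA on 5-5-5 total scores
# with a Bonferroni-adjusted pairwise contrast, mirroring the analysis above.
# The CSV file, column names, and visit labels are assumptions, not study data.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per patient per visit (complete cases only,
# since AnovaRM requires balanced data).
df = pd.read_csv("vkc_555_scores.csv")  # columns: patient_id, visit, total_score

# Omnibus test: does the mean total score change across follow-up visits?
print(AnovaRM(df, depvar="total_score", subject="patient_id", within=["visit"]).fit())

# Bonferroni-adjusted paired comparison of baseline against one follow-up visit,
# analogous to the pairwise contrasts reported in the results.
wide = df.pivot(index="patient_id", columns="visit", values="total_score")
n_comparisons = wide.shape[1] - 1  # each follow-up visit compared with baseline
t_stat, p_val = stats.ttest_rel(wide["baseline"], wide["month_12"])
print("baseline vs. 12 months: adjusted p =", min(p_val * n_comparisons, 1.0))
```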
Results A total of 70 eyes from 70 patients were included in the study. The mean age of the participants was 15.4±3.22 years, ranging from eight to 22 years. The gender distribution showed that the majority of the individuals were male (61.40%, n=43), and the rest were female. On the baseline assessment, the mean score was 203.17±102.05, which initially showed a slight increase after one month (206.10±122.22) but subsequently reduced at every follow-up, reaching 9.06±28.48 (range=0-101). A similar reduction was also noted for scores pertaining to different severity levels, as the mean score in the 100-point category reduced from 178.57±100.57 (0-400) to 12.85±33.71 (range=0-100) at the one-year post-therapy examination, which further reduced to 7.14±25.93 (range=0-100) after another year (Table 1 ). It was also observed that the total mean scores of the follow-up visits showed a statistically significant reduction in comparison to the baseline examination except for the visit at one month (p-value=1.00). Furthermore, a statistically significant reduction in mean scores between subsequent visits was observed until the follow-up at six months, after which stability of the scores was recorded, as the difference in mean values was not statistically significant. The 10-point score along with the others also showed a similar pattern of change (Table 2 ). Figure 1 also shows that tacrolimus reduced the features of VKC equally at every level of severity on the 5-5-5 scale. A burning sensation was reported by 10 patients (14.28%). None of the patients had any significant ocular or systemic complications in our study, and relapse after discontinuation of therapy was observed in 5.71% (n=4) of the sample, who were then restarted on the same therapy.
Discussion VKC has a significant impact on the quality of life and daily activities of patients because symptoms can lead to lack of sleep, reduced outdoor activities, and school dropout. In addition, increased duration and severity of the disease have a substantial effect on the quality of life of adults and a considerable financial impact. Thus, such patients require psychological support to avoid both overmedication, such as corticosteroid overuse leading to decreased vision, and under-use of the drug, resulting in scarring and stem cell deficiency [ 3 ]. In developing countries, such as Pakistan, an ocular topical preparation of tacrolimus is not available, so considering the potential role of tacrolimus in controlling severe cases of VKC and the use of the dermatological ointment in various studies [ 2 , 22 - 25 ] with beneficial effects similar to those of topical ocular therapy, we also used tacrolimus skin ointment in order to evaluate its role in the stabilization of VKC. We evaluated the clinical outcome and safety of tacrolimus skin ointment (0.03%) in patients suffering from VKC. Our results revealed three important findings. First, 12-month therapy with tacrolimus skin ointment 0.03% reduced clinical scores from a baseline of 203.17±102.05 (206.10±122.21 at three weeks) to 69.94±70.54 and 19.81±40.29 at three and four months, respectively. Second, tacrolimus skin ointment 0.03% has similar effects on the reduction in scores of 100-, 10-, and one-point signs. This means that the drug can be given to eyes irrespective of the signs in the exacerbation grading scale, with a linear reduction pattern of 100-, 10-, and one-point signs. Third, no serious adverse effects were noted during our study period, suggesting that it is safe for the treatment of VKC. Severe VKC is considered a refractory form of allergic conjunctival diseases (ACDs), and this study would guide the treatment of such patients. This study demonstrates that 12-month use of tacrolimus skin ointment 0.03% is effective in reducing all features of VKC, including cobblestone papillae, and helps to maintain the stable stages of VKC, thereby improving the quality of life of patients. Almost all patients in our study showed dramatic improvements in clinical scores without developing any significant adverse effects. The only adverse effect was a burning sensation, which was reported by 10 patients. Only three patients were non-responders, who were shifted to topical antihistamines and steroids. Four patients showed recurrence of disease one month after cessation of the tacrolimus therapy, which again responded when tacrolimus was restarted. In the study by Hirota et al., there was an increased remission rate at the 24th month with prolonged use of tacrolimus ophthalmic suspension (0.1%) [ 26 - 28 ]. This suggests that to achieve maximum response with therapy, it should be continued for two consecutive seasons in patients with severe symptoms. Considering the stable stages of VKC, further studies are needed to determine the role of proactive low-dose immunosuppressant therapy in preventing recurrence of the disease. The effectiveness and safety of topical tacrolimus 0.1% over six months have been evaluated in a nationwide survey in Japan, which included more than 1,000 patients with VKC and AKC, but topical steroids were given when needed [ 29 ]. Yazu et al. reported long-term improvement in clinical signs of severe and refractory VKC and AKC with tacrolimus, but the sample size was small [ 30 ]. 
The adverse effects of tacrolimus noted in our study and the literature are burning and stinging. Reactivation of herpes simplex viral keratitis was a concern because of the immunosuppressive effect of tacrolimus [ 20 ], but none was reported in our study or the literature. A recent meta-analysis included five studies and reported significantly lower ocular objective sign scores (standardized mean difference (SMD) −1.39, 95% CI −2.50 to −0.27; p < 0.05) and subjective symptom evaluation scores (SMD −0.92, 95% CI −1.59 to −0.24; p < 0.05). There was high heterogeneity in this study because the control interventions were not the same. Some control groups were given a placebo, while others were given cyclosporin, interferon α-2b, or tobramycin-dexamethasone. The subjective symptom scores of the patients in the tacrolimus trial group (TAC) at the end of the treatment were significantly lower than those of the control group. There was a difference in drug concentrations (0.003%, 0.005%, 0.01%, and 0.1%), frequency of tacrolimus use across different studies, scoring indications, follow-up time, and treatment duration. This variation leads to difficulty in analyzing data and reduces the credibility of the meta-analysis. Shoughy et al. [ 30 ] used 0.01% topical tacrolimus, and the frequency was twice daily. However, when a concentration of 0.005% was used by Kheirkhah et al. [ 15 ], the frequency was four times daily, with safety and efficacy in both studies, but the long-term outcome was not studied. In our study, tacrolimus skin ointment 0.03% was also used twice daily, which showed clinical efficacy and safety in the long term. The efficacy and safety of tacrolimus eye drops in different concentrations (0.003%, 0.005%, 0.01%, and 0.1%) and 0.03% skin ointment for the treatment of VKC have been reported. It is effective in reducing clinical signs and symptoms of severe VKC that are refractory to topical antihistamines and topical cyclosporin [ 7 - 11 ]. There are certain limitations in our study. First, it is a prospective study without a control group. Second, selection bias was possible as we considered only those patients who could be followed up for two years. Third, non-responders were excluded from the study, and there is a need to study those cases. Thus, further randomized controlled trials are needed to determine proper dosage, frequency of administration, and scoring indications in clinical studies to facilitate data analysis.
Conclusions It can be concluded that tacrolimus is effective in the long-term management of VKC and does not lead to the complications observed in steroid-based therapy. It can be prescribed irrespective of the severity of the disease, resulting in stable stages. In cases of disease recurrence, it can be prescribed again to achieve a reduction in clinical signs and symptoms. As treatment with tacrolimus improves the overall quality of life of patients with VKC, protocols should be developed to incorporate it into the mainstream management plan of VKC.
Background Vernal keratoconjunctivitis (VKC) is an allergic conjunctival inflammation with severe ocular complications if left untreated. The current management regimen is plagued with adverse effects, long-term problems, and clinical relapses. Tacrolimus offers an alternative treatment option, and long-term studies are needed to determine its efficacy. Methods A two-year follow-up study was conducted on patients with moderate to severe VKC who were prescribed tacrolimus skin ointment. The 5-5-5 exacerbation scale was used for monitoring and grading the severity of the disease. Analysis of variance (ANOVA) and intergroup comparisons were conducted on exacerbation scale scores across follow-ups. Results A significant reduction was observed in the total severity score from baseline (203.17±102.05) to the three months' follow-up (69.94±70.54), and it continued to decrease for 18 months post therapy. Similar statistically significant reductions were observed for all grades of the scale. The relapse rate was 5.71% within a month after therapy cessation, and none of the other patients showed relapse afterward. No significant ocular or systemic complications were observed during the study. Conclusion Tacrolimus is effective in the long-term management of VKC without the complications of conventional steroid-based therapy.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50579
oa_package/ab/8c/PMC10788690.tar.gz
PMC10788691
38226099
Introduction Retinoblastoma (RB) is a malignant tumor that develops from the immature cells of the retina [ 1 ]. It is the most common intraocular cancer of childhood, affecting approximately 1 in 15,000-20,000 births, with an incidence of 7000-8000 new cases worldwide and 4000 deaths annually [ 2 ]. It is a common cause of blindness, morbidity and mortality, particularly in the underdeveloped countries of sub-Saharan Africa [ 3 ]. Regions with the greatest prevalence have the highest mortality, with up to 70% mortality in Asia and Africa, compared with 3-5% in Europe, Canada and the USA [ 4 ]. In Uganda, RB is the fifth most common cancer after lymphomas, Kaposi sarcoma, leukemia and nephroblastoma [ 5 ]. The survival of RB patients in Africa, and Uganda in particular, is low, largely because of delayed presentation [ 6 ]. Advanced-stage disease is associated with very poor outcomes, and survival largely depends on the severity of disease at presentation. Survival rates in the UK and USA approach 100%, whereas survival in other countries, primarily developing nations, is much lower. Survival rates have been reported to be 80-89% in developed Latin American countries, 48% in India and as low as 20-46% in Africa. In Uganda, survival rose from 45% in the pre-chemotherapy era to 65% in the post-chemotherapy era [ 6 ]. A worldwide issue is poor access to comprehensive RB pathology [ 4 ]. Histological examination of the enucleated globes in the region has also been inconsistent, as shown in studies done in Uganda and Kenya [ 7 , 8 ]. RB management that is not informed by histological examination could impede the development of a rational management plan and lead to unsatisfactory clinical outcomes. There is a paucity of data on the association between clinical and histopathological features and survival that could guide appropriate RB management and ultimately improve survival. This study intends to address this gap.
Materials and methods Study design and site This was a retrospective study carried out at two health facilities in southwestern Uganda. Mbarara Regional Referral Hospital (MRRH) is a government-funded tertiary hospital with a 350-bed capacity under the Ministry of Health; it provides free services and covers the districts of the southwestern region of the country. It also receives patients from the neighboring nations of the Democratic Republic of Congo (DRC), Rwanda and Tanzania. It is a teaching hospital for Mbarara University of Science and Technology (MUST) and other health training institutions in the region. The other study site was Ruharo Eye Centre (REC). REC is one of the referral centers for RB cases in the country and receives over 90 cases of RB annually. The enucleated specimens were taken to the histopathology laboratory of the MUST pathology department, which is the only government-aided histopathology laboratory in the southern and western parts of Uganda. The MUST pathology department is a referral unit for cases that require histologic diagnosis in the region. Study variables Clinical features were retrieved from archived records. These included leukocoria, strabismus, proptosis, uveitis, cataract, staphyloma, phthisis, laterality and treatment. Clinical outcome data were recorded as either alive or dead. Staging was performed using the American Joint Committee on Cancer (AJCC) 8th edition tumor, node, metastasis (TNM8) staging system as pT1, pT2, pT3 and pT4. The following histologic features were noted: growth patterns as exophytic, endophytic and mixed; invasion of the lens, conjunctiva and corneal epithelium; invasion of anterior segment structures as present or absent (iris, ciliary body and trabecular meshwork); necrosis as none, mild (involving less than 25%), moderate (25-50%) and extensive (more than 50%); calcification as none, mild (involving less than 25%), moderate (25-50%) and extensive (more than 50%); and Flexner-Wintersteiner rosettes as mild (0-25%), moderate (25-50%) and many (more than 50%). Well-differentiated tumors were those in which rosettes constituted more than 50%, moderately differentiated tumors those with less than 50% rosettes, and poorly differentiated tumors those without any rosettes; Homer-Wright rosettes were recorded as absent or present; mitosis as present or absent; and the presence of inflammation as chronic (lymphohistiocytic) or acute inflammation. Data analysis Stata Statistical Software release 13 (2013, StataCorp LLC, College Station, Texas, USA) was used for the analysis. Baseline participants' characteristics were described using appropriate summary statistics, namely the mean or median for continuous variables and proportions for categorical variables. The histopathological stages of RB among children were also presented as proportions. Survival after two years in care was computed as a cumulative measure and expressed as a proportion of all children still alive by two years out of all who were admitted with RB at REC. The corresponding 95% confidence interval (CI) was also reported. Independent variables included sociodemographic factors, such as age, gender and geographical region of residence, in-hospital care, histopathologic features of the tumor and clinical presentation of the children. Unadjusted risk ratios (RRs) were reported together with their corresponding 95% CIs. A significance level of 5% was used. 
All independent variables with p<0.1 were included in the multivariate model building using a manual backward-stepwise selection method. Variables that lost their association with survival at two years were excluded from the final multivariate model. In addition, variables that prevented convergence of the model were excluded. For all variables in the final model, adjusted RRs were presented with their corresponding 95% CIs.
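As an illustration of this step, risk ratios for a binary outcome such as two-year survival can be obtained with a modified Poisson regression (a Poisson GLM with robust standard errors). The sketch below is a hypothetical Python equivalent of the Stata analysis; the data file and variable names are assumptions, not the study's dataset.

```python
# Hypothetical sketch only: unadjusted and adjusted risk ratios for two-year
# survival via modified Poisson regression with robust (HC0) standard errors.
# The CSV file and column names are assumptions, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rb = pd.read_csv("rb_cohort.csv")  # alive_2yr (0/1), female, leukocoria, on_invasion, orbital_extension

def risk_ratios(formula, data):
    """Fit a Poisson GLM with robust errors; return exp(coef) as RRs with 95% CIs."""
    res = smf.glm(formula, data=data, family=sm.families.Poisson()).fit(cov_type="HC0")
    ci = np.exp(res.conf_int())
    ci.columns = ["2.5%", "97.5%"]
    return pd.concat([np.exp(res.params).rename("RR"), ci], axis=1)

# Unadjusted RR for a single predictor (e.g., female gender).
print(risk_ratios("alive_2yr ~ female", rb))

# Adjusted RRs for the predictors retained in the final multivariate model.
print(risk_ratios("alive_2yr ~ female + leukocoria + on_invasion + orbital_extension", rb))
```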
Results We included 78 eye specimens in the study. As shown in Table 1 , the median age at diagnosis was 31 months, and most of the participants were between 12 and 59 months (78.4%, n=58). The majority were males (55.1%, n=43), and most of them originated from western Uganda (33.3%, n=26). In most cases, only one eye was affected at the time of diagnosis (70.5%, n=55). The most common clinical sign was leukocoria (69.2%, n=52), which was followed by proptosis (32.1%, n=25), uveitis (16.7%, n=13), phthisis (12.8%, n=10), buphthalmos (9%, n=7), staphyloma (5.1%, n=4), cataracts (1.3%, n=1) and lastly strabismus (1.3%, n=1). The most common pathologic stage was stage 1 (41.0%, n=32), followed by stage 4 (26.9%, n=21), stage 2 (26.9%, n=21) and lastly stage 3 (5.1%, n=4) (Figures 1 , 2 ). Choroidal invasion was seen in 29.5% (n=23) of the specimens, and more than half of these were massive (Figure 3 ). Optic nerve (ON) invasion was seen in 38.5% (n=30) of cases (Figure 4 ), with almost half of these having invasion to the surgical end/margin. Orbital extension was seen in 16.7% (n=13) of cases, while scleral invasion was seen in only 7.7% (n=6). Iris, trabecular meshwork and ciliary body invasion accounted for 16.7% (n=13) each. Lens, corneal, conjunctival and vascular invasion accounted for a combined 7.7% (n=6). An endophytic tumor was seen in 71.8% (n=56) of cases (Figure 5 ). Flexner-Wintersteiner rosettes were seen in 34.6% (n=27) of cases, while Homer-Wright rosettes were seen in only 6.4% (n=5) of cases. Necrosis was seen in 71.8% (n=56) of cases, calcification in 41% (n=32) of cases, and mitoses in only 9% (n=7) of cases (Table 2 ). The two-year survival was estimated to be 61.5% (n=48), as shown in Table 3 . At the univariate analysis, gender, region of origin, leukocoria, proptosis, cataract, choroidal invasion, orbital invasion and necrosis were significant factors in predicting survival. Age, laterality, chemoreduction, buphthalmos, growth pattern, calcification, vascularity, mitosis and differentiation were not significant. RRs could not be produced for scleral invasion and anterior segment invasion because the alive category had zero counts (Table 4 ). Female gender, leukocoria, proptosis, choroidal invasion, ON invasion, orbital invasion, region, cataract and necrosis were run in the final model; however, region, cataract, choroidal invasion, proptosis and necrosis would not allow convergence, so they were eliminated from the final model. Female gender, leukocoria, orbital extension and ON invasion were significant predictors of survival, with females surviving 1.4 times better than males and patients with leukocoria surviving 1.1 times better than those without. Patients without ON invasion survived better than those with ON invasion, depending on the degree of invasion, and patients without orbital extension were able to survive seven times better than those with orbital extension (Table 5 ).
Discussion The median age of the patients at presentation was 31 months, which is comparable to a study at a tertiary center in Kinshasa, Democratic Republic of Congo, that found a median age of 32 months, and to a median of 29 months in an Indian population. However, this is not comparable to studies done in developed countries, such as in the UK, that reported a median age of 12 months. This is attributed to the late presentation and delayed diagnosis of these cases in our setting and other developing countries [ 9 ]. The prevalence of males and females varies widely between studies. Males accounted for 55.1% (n=43), which was comparable to a Kenyan study with 54% and other studies in both developed and developing countries, such as Turkey and Pakistan [ 8 ]. However, studies done in Malaysia and Nigeria have reported a higher prevalence of females. This could be attributed to genetic differences in the different populations and referral selection due to differences in cultural beliefs. The most commonly involved age group was 12-59 months (78.4%, n=58). Delayed diagnosis is commonly encountered in developing countries, with 90% of cases diagnosed before the age of five years in a study done in Cameroon and 85% in Nigeria, which is consistent with our study at 91.2% [ 10 ]. On the contrary, most children diagnosed with RB in developed countries are less than 24 months old because of the early presentation and diagnosis [ 1 ]. The delayed diagnosis in developing countries is due to the lack of awareness and poor accessibility to referral/tertiary centers where these patients can be ably managed [ 9 ]. Unilateral cases were 70.5% (n=55), and this is comparable to many studies that reported unilateral cases to be 72%, 74% and 71.2% in Kenya, Uganda and Pakistan, respectively [ 7 , 8 ]. Sub-Saharan African studies have found 11% to 33% of patients with RB to have bilateral disease, as seen in studies done in the Republic of Côte d'Ivoire and the Democratic Republic of the Congo, which is in keeping with our study [ 11 ]. Leukocoria (69.2%, n=52) and proptosis (32.1%, n=25) were the most common presenting signs. This was comparable to a study done at Kenyatta Hospital in Kenya at 71% and 37%, respectively [ 8 ]. Moreover, leukocoria was the most common presenting sign in the Republic of Côte d'Ivoire and the Democratic Republic of the Congo [ 11 , 12 ]. However, this differs from data in middle-income to upper-income countries, such as Egypt and the UK, where leukocoria and strabismus are the most common presenting signs [ 13 ]. This is because signs such as proptosis indicate advanced RB and present when there is most likely orbital extension. Intraocular tumours (stages pT1-3) constituted 73.1% (n=57), while extraocular tumours (stage pT4) were 26.9% (n=21). This was comparable to a study done in India that found intraocular tumours and extraocular tumours to be 72.3% and 27.7%, respectively [ 9 ]. pT4 was our second most common stage, which was consistent with the findings of a study done in India that showed pT1 of 48.1% and pT4 of 26% [ 14 ]. However, this is lower than the percentage found in a study done at REC, Uganda, that showed that almost half the tumours were extraocular (46%). This is due to the introduction of an effective, safe chemotherapy regimen in Uganda, which presumably reduced the progression of disease to advanced stages [ 6 ]. 
An endophytic pattern was seen in 71.8% (n=56), an exophytic pattern in 17.9% (n=14) and a mixed pattern in 10.3% (n=8). The incidence of growth patterns varies widely, with the endophytic and mixed types being more predominant. This difference could be attributed to a difference in the biologic nature of these tumors [ 14 ]. Choroidal invasion was seen in 29.5% (n=23), which is comparable to the 23% reported by Shields et al. (1993). Although the incidence of choroidal invasion varies greatly in various reported series, ranging from 15.2% to 62%, our figure is lower than findings in other developing countries, such as India with 47.4% [ 15 ]. This has been attributed to the limited peripheral calottes that were taken during the sectioning. However, massive choroidal invasion was seen in 12.8% of cases, which is comparable to the 18% seen in Jordan, but it was still lower than that from other studies, such as in India at 24.6%, because of the limited sectioning [ 15 ]. Reports indicate that 24% to 45% of eyes have a degree of ON invasion. ON invasion was seen in 38.5% (n=30), which was comparable with studies in America (38.7%) and India (32%) [ 16 ]. However, it was higher than that in a much earlier study in the USA, probably reflecting the advanced stage of the tumours in our study [ 17 ]. Retrolaminar ON invasion was seen in 3.9% (n=3) of cases, which is comparable to the 5.5% in Shields et al. (1994); however, studies from developing countries have shown a higher percentage, such as Gupta et al. (2009) with 17% and Eagle (2009) with 10.4% [ 18 , 19 ]. Invasion of the resected margin of the ON was seen in 16.7% (n=13) of cases. This was comparable to other studies in the developing world, but it is higher than that in developed countries, such as in the USA with 1% [ 17 ]. Scleral invasion was seen in 7.7% (n=6) of cases, which was in keeping with other studies, such as those in Argentina with 8.8% and Pakistan with 7% [ 17 ]. Orbital extension was seen in 16.7% (n=13) of cases, which is comparable to 18% in an Indian study [ 20 ]. This is due to the advanced stages of the tumours in developing countries. Invasion of the iris was seen in 16.7% (n=13) of cases, which is in keeping with a study done in India with 10.7% [ 15 ]. The higher incidence of these risk factors in the developing world might be related to later presentation (more advanced stage) in relation to the lower socioeconomic status and the delay in seeking and getting treatment [ 7 ]. Tumour differentiation is highly variable between different reports from the developing world. Many reports show a higher incidence of poorly differentiated tumours (up to 80%) compared to well-differentiated tumours, which is comparable to our study at 65% (n=51), probably reflecting the late age of presentation of undifferentiated tumours [ 15 ]. Generally, necrosis was seen in 71.8% (n=56) of cases, which was higher than in most studies because most cases in our study had undergone chemoreduction before enucleation, as compared to other studies that examined primarily enucleated eyes; however, extensive necrosis, seen in 33.3% (n=26), was comparable to the 31% of an Indian study [ 15 ]. Calcification in RB is a frequent histologic finding with a reported frequency between 40% and 95%, although the subject has not been studied in depth. Our study found calcification in 41% (n=32) of cases, which is similar to the 48% in Malaysia, but this is lower than that seen in Israel at 84% [ 21 ]. 
Two-year survival was estimated to be 61.5% (n=48), which was comparable to a study done in Taiwan at 64.4% [ 22 ]. This is higher than in surrounding countries, such as Kenya with 22.6% and Tanzania with 23%, and this difference is assumed to be due to the development of an effective, safe chemotherapy regimen in Uganda [ 6 , 8 ]. However, it is lower than in developed countries, such as the UK, where the survival rate is estimated at 95%. The poorer survival in low- and middle-income countries (LMICs) is attributed to a combination of many factors, including diagnostic delays resulting in an advanced stage of disease at presentation, lack of availability of chemotherapeutic agents, cost of treatment leading to abandonment of care and limited access to surgery and radiotherapy [ 8 ]. Age was not found to be a predictor in our study, as in many other studies, but studies in India and Singapore have shown that age less than 24 months is a significant predictor, with younger children surviving better, most likely because children who present at a younger age may have tumours diagnosed at earlier stages of the disease [ 15 ]. Our study showed that sex had a significant influence on survival (RR 1.4), with females having a 1.4-fold chance of survival compared to males. Although most studies have not shown any difference in survival between males and females with RB, many studies have shown that females have better cancer survival than males [ 23 ]. Although environmental and hormonal factors have been implicated in adulthood cancer, genetics has been thought to be the most common cause of this difference in childhood cancers, as evidenced in some studies [ 24 ]. Most studies have shown that people with leukocoria have better survival (RR 1.1), with our study showing that these patients are almost 1.1 times more likely to be alive at two years than those without leukocoria. This is because leukocoria can easily be seen as an abnormal sign, and hence patients will present when the tumor is less advanced [ 8 ]. Most studies from developing countries, such as in Kenya, have shown that proptosis is associated with poor survival, as it is a sign of more advanced disease, and this was consistent with our study, which showed a significant association; however, it was not included in the final model as it would not achieve convergence at the multivariate analysis [ 8 ]. Cataract was a significant predictor at the univariate analysis but lost significance at the multivariate analysis; this is in keeping with some studies in India and the USA, which show that orbital cellulitis, phthisis bulbi, staphyloma and cataract are clinical predictors of high-risk pathology [ 15 , 25 ]. The rate of survival in patients with ON invasion depends on the degree of ON invasion. Survival rates increase as the degree of invasion reduces, and this was evidenced at the univariate analysis, where absent ON invasion, prelaminar ON invasion and intralaminar ON invasion were significant [ 14 ]. However, at the multivariate analysis, there was statistical significance only for intralaminar ON invasion, probably due to the small numbers. Scleral invasion has been shown to be an independent factor of survival in most studies; however, this study could not provide an RR as none of the patients with scleral invasion was in the alive group [ 14 , 26 ]. 
Choroidal invasion was significant in the univariate analysis but could not be included in the multivariate model because the model would not converge, although mortality was higher in those with massive choroidal invasion. Survival in patients with anterior segment invasion has been shown to be low, but our analysis could not produce RRs because there were no survivors in the exposure group, although mortality in these patients was very high. Our study showed that orbital extension was significantly predictive of survival: patients without orbital extension were seven times more likely to survive than those with orbital extension. Orbital extension is a major cause of death in children with RB in developing countries, with mortality of up to 100%. The presence of orbital invasion was associated with a 10-27 times higher risk of systemic metastasis compared to cases without orbital invasion. This is in agreement with studies done in the USA and India [ 21 ]. Necrosis was a significant factor in the univariate analysis, although it lost significance in the multivariate analysis. Most studies, including ours, have not found necrosis to be associated with survival; however, a study done in the USA showed that extensive necrosis is associated with high-risk pathology and mortality [ 27 ]. Our study did not show that differentiation is a predictive factor, as it is in many other studies, but a study in Jordan showed that poorly differentiated tumours are associated with more advanced tumour pathology [ 28 ]. Growth type has not been associated with survival, although a study in Jordan illustrated that the mixed type independently affects survival, as it tends to be found in advanced tumours [ 29 ]. Calcification was not shown to be a significant factor for survival, although, to our knowledge, no studies have specifically examined its impact on survival. Strengths and limitations To the best of our knowledge, this was the first study in Uganda to extensively study the histopathological features of RB. The retrospective design could have limited access to some clinical information and outcomes. The small sample size limits the generalizability of these results, and we therefore recommend studies using larger sample sizes. The majority of the histopathologic traits could not be compared to prior research conducted in the area.
Conclusions Leukocoria and proptosis are the most common clinical signs of RB. The dominant pathologic stage is stage 1, although late presentation (stage 4) is also common. Survival is still low, but it is higher than in neighboring countries. Leukocoria, optic nerve invasion, orbital extension and gender were the significant factors predictive of survival in patients at Mbarara Regional Referral Hospital. We highly recommend sensitization of health workers and the community on the identification and referral of any child with leukocoria, together with improved histopathology reporting, in order to identify patients at risk.
Background: Retinoblastoma (RB) is a malignant tumour that develops from the immature cells of the retina. It is the most frequent type of paediatric intraocular cancer and is curable. Clinical and histological findings after enucleation of the affected eye dictate not only the patient's secondary care but also their prognosis. We assessed the clinical and histopathologic predictors of survival among children with RB from two tertiary health facilities in Uganda. Methods: This retrospective study utilized archived formalin-fixed, paraffin-embedded blocks of eye specimens enucleated between 2014 and 2016 at the Mbarara University of Science and Technology (MUST) Pathology Department and Ruharo Eye Centre (REC) in Mbarara, Uganda. The specimens were processed and stained with haematoxylin and eosin. RB was confirmed histologically, and the histologic stage and features of the tumour were recorded. Biographic data and clinical features, such as leukocoria, proptosis, phthisis, staphyloma and buphthalmos, were retrieved from the records. Results: Males (55.1%, n=43) dominated the study population (N=78). The median age was 31 months. The most common clinical sign was leukocoria (69.2%, n=52), and the most predominant histopathological stage was stage 1 (41%, n=32). Optic nerve (ON) invasion was seen in 38.5% (n=30), choroidal invasion in 29.5% (n=23), scleral invasion in 7.7% (n=6) and orbital extension in 16.7% (n=13) of the cases. Flexner-Wintersteiner rosettes were seen in 34.6% (n=27). Necrosis was a prominent feature (71.8%, n=56). The two-year survival was estimated at 61.5% (n=48). Leukocoria (risk ratio (RR) 1.1), female gender (RR 1.4), intralaminar ON invasion (RR 7.6) and absence of orbital extension (RR 7) were significant predictors of survival. Conclusion: Leukocoria and proptosis are noticeable clinical signs of RB. Most patients present in stage one, although stage four presentation is also common. Leukocoria, ON invasion, orbital extension and gender are significant factors predictive of survival in patients with RB.
The authors acknowledge the staff and administrative management of the two tertiary hospitals for supporting this research.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50605
oa_package/d9/d9/PMC10788691.tar.gz
PMC10788695
38226086
Introduction and background Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease that affects the motor neurons, producing upper and lower motor neuron manifestations in addition to extra-motor manifestations. It is divided into two phenotypes: a spinal type that usually starts as focal muscle weakness and wasting that spreads with disease progression, and a bulbar phenotype that presents with dysarthria, dysphonia, and dysphagia. The most commonly reported extra-motor manifestations are changes in cognition, sleep disorders, autonomic disturbance, and loss of skin elasticity. ALS is divided into sporadic cases, which account for most cases, and familial cases. ALS typically manifests in older adults, with familial cases having a lower age of onset [ 1 - 4 ]. Oral manifestations reported in ALS include increased salivation, trismus, and dysphagia, all of which have implications for maintaining good oral hygiene and providing dental care for this group of individuals. The average annual incidence of ALS varies, with most studies reporting a higher incidence in men than women and a peak incidence at 70-79 years of age. Wolfson et al. reported that incidence ranges across countries from an estimated 0.26 per 100,000 in Ecuador to 23.46 per 100,000 in Japan. Prevalence also ranged from 1.57 per 100,000 in Iran to 11.80 per 100,000 in the United States of America [ 5 ]. Age of onset varies depending on the type, with sporadic cases manifesting in the sixth to seventh decade, while the familial form occurs at a younger age. Males have a higher risk of developing sporadic limb-onset ALS compared to females [ 1 ]. This short review will briefly discuss ALS, emphasizing oral manifestations and dental management considerations.
Conclusions This review covered a broad range of information about amyotrophic lateral sclerosis and the challenges patients face in maintaining good oral health. It is also very challenging for the dental practitioner to deal with the complications associated with this disease. Collaboration between the dental team and caregivers is important to maintain adequate oral hygiene and reduce the chances of developing dental complications.
Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease of the upper and lower motor neurons, with corresponding upper and lower motor neuron manifestations. It is divided into two variants: a spinal onset and a bulbar onset. The first starts as focal muscle weakness and wasting that spreads with disease progression, while the second presents with dysarthria, dysphonia, and dysphagia. Extra-motor manifestations may also occur, the most commonly reported being changes in cognition and sleep disorders. Oral manifestations include increased salivation, limited mouth opening, and dysphagia. Patients with ALS have difficulty maintaining oral hygiene, and it is important for the practitioner and the caregiver to take care of this population. We herein provide a short review of the disease with a focus on the oral manifestations and dental considerations for the management of this group.
Review Pathophysiology and etiology ALS is a progressive neurodegenerative disorder caused by the interplay of epigenetic, environmental, and genetic factors, with more than 20 genes identified to date [ 1 ]. Neurophysiological techniques have identified cortical hyperexcitability as a feature of disease pathogenesis, and it could be used as a novel diagnostic marker as it can reliably differentiate ALS from its mimics. ALS is characterized by loss of neuromuscular connections, axonal retraction, and death of upper and lower motor neurons [ 1 ]. Dysfunction of the astrocytic excitatory amino acid transporter 2 (EAAT2) reduces glutamate uptake from the synaptic cleft, producing glutamate excitotoxicity and neurodegeneration through activation of calcium-dependent enzymatic pathways. Oxidative stress is increased by superoxide dismutase-1 (SOD-1) gene mutations, which induce mitochondrial dysfunction and defective axonal transport [ 6 , 7 ]. Clinical features ALS is divided into two phenotypes: a limb-onset ALS with upper and lower motor neuron signs in the limbs and a bulbar-onset ALS characterized by speech and swallowing difficulties, followed by limb weakness in the late stages [ 2 ]. Limb onset is characterized by progressive muscle weakness with a focal onset that spreads to adjacent body regions. It usually starts in the limb muscles, most often affecting the distal muscles first, with the patient perceiving a slight weakness in the distal part of the limb that progresses and spreads to the adjacent part of the affected limb. Subsequently, the disease progresses to the opposite limb. Muscle atrophy, muscle cramps, and stiffness accompany the weakness [ 1 ]. Bulbar onset is characterized by dysarthria and dysphagia, followed by limb weakness. Neurological disorders are very common in this group of patients, with depression, dementia, Parkinson's disease, and epilepsy found more commonly than in the general population [ 2 ]. Sialorrhea is seen in bulbar-onset ALS due to difficulty swallowing saliva and weakness of the facial muscles from upper motor neuron damage, which leads to difficulty maintaining lip seal and blowing out the cheeks [ 1 , 3 ]. Medical management Management of ALS is mainly symptomatic, with multiple caregivers involved. Patients with ALS commonly suffer from chronic respiratory failure due to weakness of the diaphragmatic and intercostal muscles. It is managed initially with chest physiotherapy and frequent suctioning. As weakness progresses, tracheostomy, chronic ventilatory support, and noninvasive positive pressure ventilation are employed [ 3 ]. Masticatory and swallowing muscle weakness leads to dysphagia that can result in weight loss. In the early stages, dysphagia is managed with diet modifications and safe swallowing techniques; at later stages, with the increased risk of aspiration, enteral nutrition is considered [ 3 ]. Dysarthria has no active cure, with little benefit gained from speech therapy, and this can be frustrating for the patient. It is usually managed with symptomatic and compensatory strategies that can help communication and improve patients' quality of life. The patient can move from oral to written communication, use an augmentative communication device, or communicate via another person [ 8 , 9 ]. Painful muscle spasm is managed with mexiletine or levetiracetam [ 10 ]. Botulinum toxin injections into the spastic muscles can be used if oral therapy is not effective [ 3 ]. 
Regarding sialorrhea, it is managed with anticholinergic medication, including amitriptyline and glycopyrronium bromide [ 2 ]. One study assessed the effectiveness of radiotherapy for sialorrhea and found a reduction in sialorrhea in 78.6% of patients with ALS in whom pharmacological agents had failed [ 11 ]. A recent Cochrane review concluded that there is low- to moderate-certainty evidence for the use of botulinum toxin B injections into the salivary glands and moderate-certainty evidence for the use of oral dextromethorphan with quinidine for the treatment of sialorrhea in motor neuron disease [ 12 ]. Physical therapy could help slow neuromuscular degeneration and improve daily activity for patients with ALS. Patients should also be provided with assistive devices as the disease progresses, including neck collars, ankle-foot orthoses, canes, crutches, and a wheelchair [ 2 , 3 ]. Riluzole remains the only approved disease-modifying drug. It has anti-glutamatergic effects and prolongs mean patient survival by three to six months; the most commonly reported side effects include nausea, diarrhea, fatigue, dizziness, and liver problems. More recently, the free radical scavenger edaravone has been used for ALS with promising results [ 1 ]. Dental management The dental practitioner has to develop a communication method with the patient, as patients with advanced disease cannot communicate well. This can be solved via the caregiver, written communication, or external augmentation devices. Patients with advanced disease cannot perform oral hygiene themselves, and nursing staff or a guardian usually carry it out. A cheek retractor can be helpful to facilitate access to the teeth and mouth during oral hygiene practice. Stabilization of the lower jaw with a dental shield is recommended for bite support, to relieve fatigue of the jaw muscles and to prevent biting of the caregiver's fingers. A tongue scraper is recommended to remove excess debris and to manage a coated tongue. Chlorhexidine mouthwash is also recommended to reduce bacterial load and prevent periodontal disease. Dental management should be carried out with a soft cushion to minimize the pressure applied to the back and help obtain a relaxed position during dental treatment. Additionally, a mouth gag can aid in cleaning and during dental treatment by facilitating mouth opening and providing access during dental work [ 13 , 14 ]. A study in the Netherlands found that most patients with ALS were not satisfied with their daily oral care [ 15 ]. Thus, the caregiver and the clinician must discuss the best possible strategies for dental care. A rubber dam, high-volume suction and saliva ejectors are necessary for dental treatment, as this cohort of patients has difficulty swallowing and increased salivation. It is also possible to give anticholinergic medication to reduce salivation. As these patients have limited mouth opening, a mini-head dental handpiece helps provide dental treatment of the posterior teeth. Patients are instructed to follow a mouth-opening exercise regimen, physiotherapy, and the TheraBite jaw motion rehabilitation system [ 16 ]. Oral manifestations Oral manifestations in patients with ALS include sialorrhea, which is seen predominantly in the bulbar form of the disease. It could be related to tongue spasticity, weakness of the facial muscles, and incompetence of the buccal muscles. Another salivary complaint is the retention of thick, viscous saliva. 
The risks of hypersalivation include the development of angular cheilitis, difficulty in speaking, sleep disturbance, and an increased risk of aspiration. The combination of increased saliva and weakness of the tongue and respiratory muscles could lead to aspiration pneumonia. Interestingly, sialorrhea has been associated with poor oral status, increased tongue coating, and increased gingival inflammation. However, another study concluded that increased salivation was associated with lower gingival inflammation and less risk of dental caries, which was attributed to the buffering and bactericidal effects of saliva [ 16 ]. Muscle weakness can result in dysphagia that leads to food debris remaining in the mouth, which promotes periodontal disease [ 17 - 19 ]. Additionally, many reports have described cases with macroglossia, atrophic tongue, tongue fasciculations, masticatory muscle pain, and progressive limitation of mouth opening [ 14 , 17 , 20 ]. Macroglossia tends to increase in prevalence with the progression of the disease, and it has a negative impact on communication [ 21 - 23 ]. Maximum mouth opening is reduced in patients with ALS, which can make it difficult to maintain oral hygiene and provide dental treatment. Recommendations (1) The dental team has to discuss with the patient and the caregiver the oral hygiene strategies that best fit the patient. (2) Jaw exercises must be implemented to reduce the progression of limited mouth opening. (3) The patient has to be placed on a strict follow-up protocol to detect and mitigate dental complications as early as possible.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50602
oa_package/42/48/PMC10788695.tar.gz
PMC10788696
38226122
Introduction Cystic lymphangioma is also referred to as cystic hygroma (CH). CHs are benign tumors arising from the lymphatic system that most often develop in the neck. They can also develop following trauma or infection [ 1 ]. They can manifest in a variety of locations, including the level of the spermatic cord, the mediastinum, the breast, the cervical area, the abdomen, the inguinal region, and the spleen [ 2 ]. Lymphangiomas occur in approximately one out of every 2000 to 4000 live births [ 3 ]. The majority of CH cases are seen in children under the age of two. They are extremely uncommon in adults and are likely caused by the growth of lymphatic capillaries in reaction to head and neck infections or trauma [ 4 ]. The lesion is typically unilateral and doughy to the touch [ 5 ]. Cervical CHs are most frequently discovered in the posterior region of the neck, whereas inflammatory or metastatic adenopathies and lymphoproliferative disorders are the most typical alternative pathologies [ 6 ]. Symptoms may appear where the lesion compresses nearby tissue, and there can be obstructive symptoms including airway blockage, dysphonia, and dysphagia [ 7 ]. Cystic lymphangiomas can be classified as congenital or acquired. Congenital lymphangiomas result from lymphatic channels that are improperly connected to the main drainage ducts. Acquired lymphangiomas develop when previously normal lymphatic pathways are disrupted as a result of surgery, trauma, cancer, or radiation therapy [ 8 ]. Diagnosis in adults is thought to be more difficult than in children, and the final diagnosis typically depends on post-operative histology [ 8 ]. Although there are numerous treatment options for lymphangiomas, surgical excision is the most popular. Other options include radiotherapy, sclerotherapy, cryotherapy, electrical stimulation, steroid therapy, and laser therapy [ 9 ]. There is evidence reporting the effect of physiotherapy treatment protocols on CH survivors during their treatment [ 10 ]. Physical therapy, which uses manual techniques, exercise regimens, and electrotherapeutic modalities, is used to treat joints with movement restrictions [ 11 ].
Discussion Interventions included cryotherapy at -110°C for four minutes and active-assisted exercises of the neck and shoulder, progressing to active-resisted exercises and finally to full active range of motion (AROM); strengthening exercises of the cervical and shoulder musculature started with isometrics held for 5 seconds and progressed to 10-second holds. Stretching of the trapezius muscle (30 seconds for three sets) and breathing exercises to avoid chest complications after surgery were also used to aid the patient's recovery. At the end of two weeks of physical therapy rehabilitation, improvement was seen in the patient. Karadag et al. reported the successful use of a combination of pulsed dye laser and cryotherapy in the treatment of two cases of lymphangioma [ 12 ]. We likewise achieved good results after using cryotherapy in our case. According to Baggi et al., joint distraction and oscillations from AROM exercises lead to early joint mobilization, reduced discomfort, and an increased nutrient supply to the joint area [ 13 ]. Following its use in our case, we also found AROM exercise to be helpful. Singhavi et al. reported the successful use of shoulder physical therapy, including active and assisted ROM, strengthening of the scapular elevators, neuromuscular retraining of the shoulder girdle muscles, and passive stretching of the trapezius, which helps in regaining shoulder movements and strengthening the shoulder musculature [ 14 ]. Using the above exercises, we obtained successful results in our case. Shanmugasundaram and Dhanasekaran used deep breathing exercises and incentive spirometry to avoid chest complications in a head and neck cancer patient in their study [ 15 ]. We also used these exercises in our case and found them to be useful.
Conclusions This case study demonstrates how physiotherapy can help patients with cystic hygroma regain their independence. The patient had CH and developed post-operative complications as a result of its excision. Physical therapy helped to regain cervical and shoulder range of motion, reduce pain, and improve the strength of the cervical and shoulder musculature. The entire treatment can be delivered within the rehabilitation program.
Cystic lymphangiomas are birth abnormalities affecting the lymphatic system. They are rare in adults and typically occur in childhood. The cause of adult cystic hygroma (CH), which is benign in nature, is still unknown. Seventy-five percent of these lymphatic malformations have their primary site of origin in the head and neck area. We describe the case of a 36-year-old female with cervical cystic lymphangioma who complained of swelling on the left side of her neck for two years. There was no prior history of fever, trauma, weight loss, appetite loss, discharge, or swallowing difficulties. Investigations such as computed tomography and ultrasonography of the neck were advised, and the patient was diagnosed with cystic lymphangioma. Early physiotherapy appears beneficial in preserving shoulder movement and minimizing pain. Cryotherapy is useful in treating patients with lymphangioma after surgery to reduce pain and swelling. This clinical case study demonstrates how patients with cystic lymphangiomas can benefit from physical treatment and regain their functional independence.
Case presentation A 36-year-old female presented to the hospital with the primary complaint of swelling on the left side of her neck for two years. The swelling had an insidious onset and had been gradually worsening since the patient first noticed it two years earlier. There was no previous history of fever, trauma, loss of weight, decreased appetite, discharge, or swallowing difficulties. The patient was a known case of tuberculosis six years earlier, confirmed by the Mantoux test, and had been managed for it. On inspection, sutures were seen (Figure 1), and on palpation, a firm swelling was present with grade 2 tenderness, that is, the patient winces due to pain (the tenderness grading scale is as follows: grade 0, no tenderness; grade 1, patient complains of pain; grade 2, patient complains of pain and winces; grade 3, patient winces and withdraws the hand; grade 4, patient does not allow the affected part to be touched). On local examination, an 8 x 6 cm swelling was present on the anterior surface of the neck in the middle and lower thirds, to the left of the midline. There were no dilated veins, discharge, or sinuses. There was tightness of the trapezius muscle. Clinical findings On examination, the range of motion (ROM) of the cervical and shoulder joints was measured before physiotherapy treatment was started and showed a reduction in joint ROM on the involved (operated) side. Table 1 presents the ROM of the joints. Table 2 shows the manual muscle testing (MMT) of the musculature, which was significantly reduced for the muscles surrounding the cervical region. Timeline of events On November 10, 2022, the patient was admitted to the surgery ward with the complaint of a swollen neck on the left side. On November 11, 2022, ultrasonography (USG) of the neck was carried out and showed well-defined hypoechoic lesions. On November 18, 2022, other investigations were carried out, and the doctor considered the patient fit for surgery. On November 19, 2022, the patient was taken for surgery, where excision of the cyst was performed. Four days after surgery, on November 23, 2022, the physiotherapy referral was noted and physiotherapy sessions were carried out. After successful recovery, the patient was discharged on December 5, 2022. Investigations A computed tomography scan of the neck showed a single well-circumscribed, peripherally enhancing, multiloculated cystic lesion in the left anterior cervical region, bounded by the cervical vertebrae and muscles such as the sternocleidomastoid, suggestive of cystic lymphangioma, as shown in Figure 2. USG of the neck was also performed and showed a well-defined hypoechoic lesion adjacent to the carotid space with multiple septations and internal debris, probably benign. After the investigations, the patient was diagnosed with cystic lymphangioma/CH on the left side of the neck. Before the histological examination, the differential diagnoses were salivary gland swelling, thyroglossal cyst, and lipoma. Histopathology showed large, irregular, dilated lymphatic channels separated by collagenous stroma. The channels were lined by a single layer of benign flattened endothelium. The stroma revealed a strong lymphocytic infiltrate in addition to the development of lymphoid follicles. CH was the confirmed diagnosis. 
Physiotherapy management The aims of post-operative physiotherapy were to treat pain and contractures, improve ROM, prevent chest complications, make the patient independent in daily life tasks, and improve her quality of life. The rehabilitation regimen is given in Table 3. Figures 3A, 3B, 4A, and 4B show the patient being rehabilitated. Follow-up and outcome measures The effectiveness of the rehabilitation was evaluated using ROM (Table 4) and MMT (Table 5). Improvement was seen in this patient with adherence to the regular exercise protocol. The ROM of the cervical joint was increased by the end of week two compared to week one. MMT was also performed after the two weeks of the treatment protocol to assess the improvement in muscle strength, which was found to have improved, and the patient was able to perform the movements independently. The neck disability index was another outcome measure recorded (Table 6).
Dhanashree Upganlawar and Neha R. Badwaik have equally contributed to the formulation of this case study and should be considered the first co-authors. Prasad P. Dhage and Priyanka A. Telang have reviewed and suggested changes and have equally contributed to making this case study.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50604
oa_package/db/2f/PMC10788696.tar.gz
PMC10788697
38226124
Introduction Infertility is the inability to conceive after having frequent and unprotected sexual contact for a year or more [ 1 ]. It can be caused by several factors, such as smoking, alcohol intake, obesity, exposure to environmental pollutants and toxins, anatomical variations, and impairments in drainage and development [ 2 ]. Ten to 15% of couples are affected by infertility, which sends them on an emotional journey of hope, frustration, and concern. During this challenging time, couples may seek various medical interventions, such as in vitro fertilization (IVF) or intrauterine insemination (IUI), to increase their chances of conceiving [ 3 , 4 ]. Comprehending the underlying obstructive and non-obstructive causes of infertility is essential to create successful treatment plans. Non-obstructive azoospermia (NOA) occurs in around 10-15% of men with azoospermia and can result from hormonal imbalance, genetic factors, and congenital anomalies [ 5 ]. Obstructive azoospermia and NOA are the two primary types of azoospermia. In obstructive azoospermia, sperm production is preserved, but a blockage prevents sperm from reaching the semen, whereas non-obstructive azoospermia is a more complicated condition involving the failure of the testes to generate sperm. Genetic predispositions, hormone imbalances, or testicular tissue injury can lead to this. Both categories present different difficulties for conceiving naturally, but these can be overcome using assisted reproductive technologies [ 6 ]. NOA occurs when there is an impairment in spermatogenesis. This impairment can be caused by hormonal imbalances, genetic abnormalities, infections, or exposure to toxins. As a result, the production of mature and functional sperm is significantly reduced or absent [ 3 , 7 ]. Developments in reproductive medicine, such as testicular sperm aspiration (TESA) and percutaneous epididymal sperm aspiration (PESA), have completely changed the treatment landscape and given couples new options to realize their parental goals [ 8 ]. Infertility caused by azoospermia is examined here together with the striking changes that advanced assisted reproductive technology can achieve [ 9 ]. In patients in whom all spermatozoa are immotile, theophylline and pentoxifylline are generally used to identify viable sperm for intracytoplasmic sperm injection (ICSI) [ 10 ]. Despite being a powerful technique, its use is debated because of the potential harm to oocytes and embryos. The mechanism of theophylline and pentoxifylline on sperm vitality involves their effects on cyclic adenosine monophosphate (cAMP) levels and inhibition of phosphodiesterase. Theophylline and pentoxifylline are phosphodiesterase inhibitors. These medications increase intracellular cAMP levels, improving sperm motility and vitality by promoting smooth muscle relaxation and improving blood flow. This increase in cAMP levels affects sperm movement and metabolism, which ultimately contributes to improved sperm function and viability [ 11 ]. Hyaluronic acid is used in transfer media containing a significant amount of the macromolecule hyaluronan, which promotes implantation. Hyaluronan is the primary glycosaminoglycan in follicular, oviductal, and uterine fluids and largely contributes to the high-viscosity environment of the female reproductive system [ 12 ]. 
It has since been discovered that hyaluronan also encourages decidualization of the endometrial lining and that greater amounts of the substance more accurately mimic uterine fluids and intrauterine conditions, aiding implantation. A bicarbonate-buffered medium containing hyaluronan and recombinant human albumin can be used for the transfer of embryos at all stages of development [ 13 ].
Discussion This case study demonstrates an effective way to treat infertility caused by azoospermia, a condition in which no sperm can be identified in the ejaculate. The medical intervention consisted of surgical sperm retrieval, including PESA and TESA. While no sperm was recovered with PESA, TESA did retrieve a few non-motile sperm [ 15 ]. Two μL of theophylline solution and 5 μL of pentoxifylline solution were used to ensure the viability of the recovered sperm. Theophylline is known for its ability to improve sperm motility and increase the chances of successful fertilization. The addition of 2 μL of theophylline enhanced the sperm's ability to swim and navigate through the female reproductive tract, and the 5 μL of pentoxifylline solution was used to provide essential nutrients and maintain sperm quality [ 16 , 17 ]. Javed et al. [ 18 ] explained that long-term exposure causes increased damage to sperm deoxyribonucleic acid (DNA), while teasing increases reactive oxygen species, all of which lead to enzymatic dilution and a lower probability of fertilization and conception, as evidenced by the higher live birth rate (LBR) with PESA compared to TESA; in our case, however, TESA showed positive results whereas PESA did not. Mahaldashtian et al. [ 19 ] explained that the acquisition of viable sperm with pentoxifylline was more effective in terms of two-pronuclear (2PN) formation and embryo development in patients undergoing a post-thaw testicular sperm extraction (TESE) protocol, without negative impacts on the integrity of sperm DNA. Oraibi et al. [ 20 ] concluded in their study that theophylline significantly reduced the time required for sperm isolation from fresh testicular samples, improved embryo quality, significantly increased the implantation rate in ICSI procedures, and significantly improved biochemical and clinical pregnancy outcomes in ICSI procedures performed on male patients. Six studies [ 15 - 20 ] served as a basis for application in our case, as we used both theophylline and pentoxifylline to obtain more promising results. Thus, four embryos were developed further using both theophylline and pentoxifylline, resulting in a positive pregnancy for our patient [ 21 ]. While this case presentation presents a thorough approach to diagnosing and treating azoospermia, several limitations must be acknowledged. For instance, the effectiveness of surgical exploration and assisted reproductive procedures can vary depending on individual patient factors, such as the location and severity of any blockage, the quality of the recovered spermatozoa, and the general health of the partner [ 22 ].
Conclusions The combination of surgical sperm retrieval, ICSI, cryopreserved embryos, the use of hyaluronic acid for embryos, and the use of theophylline contributed to the positive outcome, resulting in the birth of a healthy male baby. The case demonstrates the importance of customized fertility treatments tailored to the couple's specific needs, offering them a chance to fulfill their desire for parenthood. Furthermore, the success of this case highlights advances in reproductive medicine and the effectiveness of assisted reproductive techniques. The use of cryopreserved embryos allows better timing and planning of embryo transfer, increasing the chances of a successful pregnancy. The use of hyaluronic acid helps improve embryo implantation rates and the overall success of the IVF procedure. In addition, theophylline plays a crucial role in supporting embryo development and increasing the chances of a healthy birth. Technology will continue to advance in the field of reproductive medicine.
In this report, we present the clinical management of a male patient diagnosed with non-obstructive azoospermia (NOA), a condition characterized by the absence of sperm in the ejaculate due to impaired spermatogenesis. A 37-year-old patient underwent two surgical procedures: testicular sperm aspiration (TESA) and percutaneous epididymal sperm aspiration (PESA). The beta-human chorionic gonadotropin (β-hCG) testing that followed produced promising findings, suggesting that infertility associated with NOA can be overcome. Theophylline and pentoxifylline, phosphodiesterase inhibitors with immunomodulatory effects, were used in this case to increase sperm viability and activation after surgical sperm retrieval. Hyaluronic acid was also used as an additional therapy because it is well known for aiding sperm development and binding to oocytes. Hyaluronic acid can potentially increase the fertilization rate and improve the selection of sperm. This in-depth case study offers insightful information on the effective management of NOA by combining theophylline, pentoxifylline, and hyaluronic acid. The results highlight the potential of these therapies to improve sperm viability and function, offering a novel approach to treating male infertility. More research is required to clarify the underlying processes and confirm the effectiveness of this strategy in reproductive medicine.
Case presentation An infertile couple came to the Test Tube Baby Centre of Central India for fertility treatment. A 37-year-old man and a 34-year-old woman had carried a diagnosis of primary infertility for the last four years. They were both informed in detail of all procedures, their merits and demerits, and their informed consent was obtained. Medical History of the Couple A 37-year-old man and his 34-year-old wife sought medical evaluation for infertility. The couple had been trying to conceive for four years with no success. They were psychologically distressed by their inability to conceive and sought appropriate solutions and therapeutic choices. For the past four years, the couple had engaged in regular unprotected intercourse. Despite their efforts, they had not achieved a successful pregnancy. The female partner had a normal menstrual cycle and no documented fertility concerns, implying that the issue most likely lay with the male partner's reproductive health. The male partner had no relevant medical history. He had not had a severe illness, surgery, or persistent medical concerns. He denied ever having had a sexually transmitted infection (STI). No significant history of infertility or genetic disorders was reported in the immediate family. The male partner did not smoke or consume alcohol or recreational drugs, and he led a healthy lifestyle. Clinical Findings The male patient appeared well and in good general health. The vitals of the patient were within normal limits. The patient exhibited typical secondary sexual characteristics, including facial hair growth, a deep voice, and average muscle mass. The genital examination revealed no abnormalities. The testicles were of moderate size and consistency with no palpable mass or tenderness. Semen analysis revealed the absence of spermatozoa in multiple semen samples. The patient was advised to undergo a fructose test, and repeated testing showed the presence of fructose in the semen samples. Together with the absence of spermatozoa, this indicated that the patient had azoospermia, meaning that there were no sperm in the ejaculate. The male patient was advised to undergo hormonal testing, and the results showed that serum follicle-stimulating hormone (FSH) was elevated, indicating impaired spermatogenesis. The serum level of luteinizing hormone (LH) was within the normal range, and testosterone was also within the normal range (Table 1). Treatment plan In 2021, the patient initially went to the Wardha Test Tube Baby Centre (WTTBC). The couple received counseling about the procedure. A short antagonist protocol was used to begin the preparation for oocyte pickup (OPU) for the female patient. The leading follicle was stimulated with FSH/human menopausal gonadotropin (hMG) to develop to a diameter of 14-16 mm before the gonadotropin-releasing hormone (GnRH) antagonist was administered. In the short antagonist protocol, 2.5 or 5 mg/d of hMG was administered in addition to human chorionic gonadotropin (hCG) to stimulate ovarian function. The GnRH antagonist and FSH/hMG were co-administered up to the final trigger of the short protocol. To trigger oocyte maturation, 10,000 IU of hCG was administered 36.5 hours before ovum pick-up. Sperm extraction through PESA was completed on the same day. Eight weeks after PESA, with no sperm having been recovered, TESA was performed and occasional, non-motile spermatozoa were recovered. 
PESA, a blind procedure, was performed under oral sedation and local anesthesia. During PESA, the andrologist identified the epididymal tubules. A 20-22 G butterfly needle attached to a 2 mL syringe containing 1 mL of sperm wash medium was used. The upper scrotum served as the entry point for the needle insertion under anesthesia. Subsequently, the syringe was employed to establish and maintain suction, and a hemostat was utilized to clamp the butterfly tubing. The butterfly needle was then placed in the epididymis while still inside the scrotum after the tip had been positioned between the forefinger and thumb. After insertion, the hemostat was removed, allowing the vacuum to draw the sperm into the tubing. The needle was repeatedly advanced and withdrawn without being removed from the skin to draw as much sperm as possible into the tubing and syringe. The sperm was placed on a glass slide by the embryologist for analysis. After the needle was withdrawn, pressure was applied to the area to reduce the chance of a hematoma. Similar to PESA, TESA also uses a small butterfly needle and a hemostat to maintain suction. The andrologist can handle the testis after anesthetic block of the skin and spermatic cord while keeping the epididymis immobile posteriorly. The scrotal skin is pulled taut before using the gun syringe to create the greatest vacuum possible. Subsequently, the needle is inserted into the mid-anterior portion of the testis. The needle is kept inside the testicular parenchyma and gently moved forward and backward while suction is maintained. The needle is withdrawn after 10 passes while the suction is monitored. As the needle is withdrawn, tubules should follow. These tubules are grasped with forceps, cut, and collected on the skin's surface. Subsequently, the syringe is removed from the gun, and its contents are expelled into a glass Petri dish for the embryologist to examine with a stereomicroscope. The embryologist then investigated whether motile or immotile sperm had been aspirated. Sperm are extracted from the surrounding tissue after being freed from the seminiferous tubules in which they had been carried. To test the viability of the sperm, theophylline and pentoxifylline are used [ 14 ]. In this investigation, a theophylline solution was used, and a pentoxifylline solution with a concentration of 5 mM was prepared from powder. Five μL was used for the ICSI drops, 40 μL was used for the medium drop containing the immotile sperm, into which the theophylline solution was introduced, and 10 μL was used for the medium drop in which the sperm were rinsed after exposure to theophylline and pentoxifylline. The theophylline or pentoxifylline solution was incubated at 37°C. The drop containing the sperm received 2 μL of theophylline solution and 5 μL of the 5 mM pentoxifylline solution to help activate the sperm. In cases where many oocytes had been recovered, theophylline or pentoxifylline was added to each 40 μL drop to obtain more viable sperm before searching for viable sperm for ICSI. The plate was then set on a heated plate at 37°C for 10 minutes. The quality and morphology of the selected sperm were confirmed using a 40x lens under a microscope. Sperm that were visibly alive and motile were selected, rinsed in medium, immobilized in a PVP solution, and injected into the oocyte. 
Follow-up and outcome Ten oocytes were recovered at OPU, of which eight were M2 (mature) and two were M1 (immature). The eight mature oocytes underwent ICSI. Six oocytes were fertilized on day 1, and two blastocysts had formed by day 5. Cryopreserved embryos were used for embryo transfer. After two months, the embryo transfer was performed on day 12 of the menstrual cycle. Only good-quality grade 3BA embryos were chosen (Figure 1). The selected embryos were placed in 50 μL of hyaluronic acid. Before transfer to the uterine cavity, the embryos were incubated for 10 minutes in the transfer medium in a 6% CO2 atmosphere at 37°C. The urine pregnancy test (UPT) was positive, and serum β-hCG confirmed a successful pregnancy with a value of 307 mIU/mL. Ultrasonography (USG) revealed a single gestational sac. At 37 weeks of pregnancy, a healthy male baby was delivered.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50623
oa_package/ef/75/PMC10788697.tar.gz
PMC10788698
38226098
Introduction The inception of clear aligner treatment marked a significant shift in orthodontic therapeutics in the late 20th century. This innovative technique, brought to life by Align Technology, introduced a transparent aligner for dental alignment, providing a revolutionary alternative to traditional alignment methods [ 1 , 2 ]. The Invisalign appliance unveiled in 1999, offered a solution for mild malocclusions, addressing the rising demand among adults for a way to improve their smiles without resorting to braces. Align Technology (San Jose, United States) has since made strides in enhancing its technology, refining its 3D interface, integrating diverse attachment designs to assist various types of tooth movement, and upgrading its software to calibrate precise force determinations for each tooth movement, thereby allowing for the treatment of a wide array of malocclusions [ 3 ]. The iTero Element scanner (Align Technology, San Jose, United States), another innovative tool in orthodontics, demonstrated increased speed and negligible errors, thus providing an economic advantage for practitioners. However, it was observed that the scanning accuracy was somewhat lower in the upper jaw compared to the lower jaw [ 4 - 7 ]. Introduced by Cadent LTD and later reimagined by Align Technology in 2013, the iTero® scanner was developed to work in synergy with the OrthoCad® and Invisalign Clincheck® software (both produced by Align Technology, San Jose, United States). This integration enables clinicians to review and adjust the proposed treatment plans when utilizing Invisalign® technology [ 8 , 9 ]. In 2013, Diamond Braces (Englewood, United States) introduced the SmartTrack material, a multi-layer polymer, replacing the standard aligner material. This innovation significantly reduced the peak intensity and duration of pain, as well as insertion pressure [ 10 ]. Align Technology has progressively updated its G-series software, incorporating clinical innovations to improve treatment outcomes. This evolution began with G3 in 2010, followed by G4 in 2011, G5 in 2014, and the most recent update, G8, with smart force aligner activation, introduced in 2021 [ 11 ]. Earlier studies have used 3-dimensional (3D) superimposition of predicted treatment outcomes onto the actual models as a technique to assess Invisalign's accuracy in predicting various types of tooth movements. However, it was reported that multi-trial digital superimpositions were repeatable with reduced errors. The average accuracy of anterior tooth movement with Invisalign was 41% [ 11 , 12 ], while canine rotation and incisor intrusion were the least accurately predicted movements. On the other hand, all horizontal and extrusion movements of the incisors were found to be the most accurate [ 13 ]. Tien et al. reported mean accuracy rates of 72.2% and 82.3% for upper and lower intercanine widths, respectively [ 14 ], while upper and lower intermolar widths showed accuracy rates of 63.5% and 79.8%. Recent reports indicated that Invisalign predicted an overall mean accuracy of 76.85% [ 15 ]. It's important to note that most of these studies are based on the Invisalign G7 series. Invisalign G8 introduces a series of enhancements designed to improve the predictability of deep-bite correction. Among these advancements is the implementation of balanced anterior en-masse intrusion, a technique that ensures uniform movement of anterior teeth. 
The G8 system also incorporates a newly optimized attachment specifically designed for the lower lateral incisor. This innovation enhances grip and control during the movement of these teeth, thereby increasing the predictability of the treatment outcome. Another significant modification in the G8 version is the overcorrection of lower incisor intrusion and the flattening of the Curve of Spee. Overcorrection is a strategy implemented to account for potential relapse, while the flattened Curve of Spee helps in achieving better occlusion and function. There is a need for more research to improve the predictability of treatment, since predicted tooth movements do not completely coincide with realized tooth movements. Hence, this study aimed to investigate the impact of the latest updates added to the G7 and G8 Invisalign series on actual versus predicted outcomes and the percentage accuracy of treatment. The null hypothesis examined in this research states that there is no significant difference in the percentage accuracy of aligner therapy administered using the Invisalign G7 and G8 series.
Materials and methods Ethical approval This retrospective cohort study was authorized by the Institutional Review Board of Riyadh Elm University and assigned the approval number SRP/2021/87/505/484. Study sample The samples used in this study were selected at the investigators' discretion and were designated as convenience samples. The information was obtained from private orthodontic practices in Riyadh, Saudi Arabia, where the treating orthodontists were all Invisalign® Diamond providers or above. The study group comprised patients with different malocclusion types who received non-extraction Invisalign® treatment, including all its variants and improvements, resulting in the submission of at least two approved ClinChecks®. Each set of aligners was worn for three weeks, with an average treatment time of one year. The Invisalign treatment plan was provided by the ClinCheck program for patients treated between 2016 and 2022. For each patient, a digital treatment plan was created using ClinCheck, which provided the anticipated treatment result at the end of the plan. To see the actual outcome of the treatment, we employed a technology that captures a 3D scan of the patient's mouth, the iTero Element, which allowed us to compare the expected and real results. We superimposed the digital models from ClinCheck onto the iTero Element's 3D images using OrthoCAD, another program. This comparison allowed us to quantify a number of tooth movements, including the angle formed by the upper and lower front teeth (the inter-incisal angle), the amount of overlap between them, and changes in the intercanine and intermolar widths. Only patient records that satisfied our particular requirements were included; records that did not fit these requirements were excluded from the analysis. Selection criteria The study included patients who were healthy and had been exclusively treated with Invisalign. These patients had to demonstrate good compliance during their therapy. They had to have undergone both an initial and a concluding intraoral digital scan. Furthermore, only patients who had used at least one refinement set were included. On the other hand, individuals were excluded from the study if they presented with systemic disease, syndromes, or cleft lip and palate. Non-compliance with aligner wear also resulted in exclusion. Patients who had undergone dental procedures or oral surgery prior to the final scan were not included in the study. The study also excluded any patient under the age of 18 and those with any missing teeth, with the exception of the third molars. The patient files were selected from the ClinCheck program in conformity with the inclusion criteria, and age was determined by subtracting the patient's year of birth from the current year. The data and scans were directly linked to the ClinCheck program and an iTero scanner, which was used to capture the first refinement scan from the patient's record and superimpose it on the first scan of the initial treatment in the system. The progress-tracking software and its report are regarded as Align's confidential information and cannot be given to any third party. Two digital models were presented, one representing the initial dentition scan and the other representing the most recent scan; the models can be viewed and evaluated in four planes (sagittal, vertical, transverse, and arch length). 
The stages of each aligner were recorded independently to compare the predicted outcome from ClinCheck with the first refinement scan. The progress assessment was divided into four categories: initial measurement, current measurement, programmed measurement, and final measurement. Four postgraduate students selected for this investigation played a significant role in both the management of samples and the data analysis. They were involved in all stages of the research process under the supervision and guidance of three experienced orthodontists, including the collection and management of samples, conducting the analyses, interpreting the results, and contributing to the writing of the final report. In terms of quality control and supervision, experienced professionals and faculty advisors oversaw the entire process to ensure the accuracy and reliability of the data analysis. They ensured that the research methodologies were correctly applied, the data analysis was conducted appropriately, and the results were accurate and reliable. Statistical analysis During this phase, an array of statistical procedures was executed to discern the relationship between anticipated and actual outcomes. Specifically, both descriptive and inferential statistical methodologies were employed to evaluate the variables of interest. Initially, separate computations of central tendency (mean) and dispersion (standard deviation) were performed for the G7 and G8 series of Invisalign treatments. These calculations facilitated an understanding of the typical behavior of these variables and the extent of their deviation. As a subsequent step, we performed a normality test, a statistical procedure used to determine whether a dataset is well modeled by a normal distribution; by assessing the skewness and kurtosis of our dataset, we gauged its approximation to a Gaussian distribution. To quantify the discrepancies between expected and observed results for the G7 and G8 series, we employed the independent samples t-test. This inferential statistical test allowed us to analyze whether the mean difference between the two groups (expected and actual outcomes) was statistically significant. To measure the degree of linear correlation between predicted and actual variables, we utilized Pearson's correlation coefficient. A strong positive correlation would imply a high degree of predictability between the expected and actual outcomes, while a weak or negative correlation would suggest less predictability or an inverse relationship. To verify the precision of our findings, we juxtaposed the actual results against the predicted outcomes; the accuracy of the results was determined by the degree of congruence between these two datasets. These calculations were executed using the statistical software IBM SPSS Statistics for Windows, Version 25 (Released 2017; IBM Corp., Armonk, New York, United States). Statistical significance was inferred if the p-value was less than 0.05, suggesting that the observed results would be highly unlikely under the null hypothesis.
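As a minimal sketch of the analysis pipeline described above, the Python snippet below reproduces the same steps (descriptive statistics, a normality check, an independent-samples t-test, Pearson's correlation, and a percentage-accuracy calculation) on illustrative data. The measurement values, the variable chosen, and the accuracy formula (achieved change divided by predicted change from a common baseline) are assumptions for demonstration, since the analysis was performed in SPSS and the paper does not give the exact accuracy formula.

import numpy as np
from scipy import stats

# Illustrative predicted vs. achieved values (mm) for one variable, e.g. upper intercanine width.
# These numbers are hypothetical and are not taken from the study dataset.
predicted = np.array([36.9, 37.4, 36.1, 38.0, 35.8, 37.2])
achieved = np.array([36.4, 37.0, 35.9, 37.1, 35.5, 36.8])

# Descriptive statistics (mean and standard deviation)
print(f"predicted: mean={predicted.mean():.2f}, sd={predicted.std(ddof=1):.2f}")
print(f"achieved:  mean={achieved.mean():.2f}, sd={achieved.std(ddof=1):.2f}")

# Normality check (Shapiro-Wilk used here as a stand-in for the skewness/kurtosis assessment)
w, p_norm = stats.shapiro(achieved)
print(f"Shapiro-Wilk p (achieved) = {p_norm:.3f}")

# Independent-samples t-test between predicted and achieved values, as described in the methods
t_stat, p_t = stats.ttest_ind(predicted, achieved)
print(f"t = {t_stat:.2f}, p = {p_t:.3f}")

# Pearson correlation between predicted and achieved values
r, p_r = stats.pearsonr(predicted, achieved)
print(f"Pearson r = {r:.2f}, p = {p_r:.3f}")

# Percentage accuracy: achieved change as a percentage of predicted change from a common
# (hypothetical) baseline; values above 100% indicate more movement than planned.
baseline = np.array([35.0, 36.2, 34.8, 36.5, 34.9, 35.9])
accuracy = 100 * (achieved - baseline) / (predicted - baseline)
print(f"mean percentage accuracy = {accuracy.mean():.1f}%")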
Results A total of 108 patients (male=34 (31.5%) and female=74 (68.5%)) treated with Invisalign G7 and G8 series were considered in this study. Table 1 shows the distribution of patients treated with the G7 and G8 Invisalign series. Most patients had class I malocclusion 72 (66.7%), with 21 patients treated with G7 and 51 treated with the G8 Invisalign series. Twenty-three patients with class II malocclusion were treated with the G7 (n = 6) and G8 (n = 17) Invisalign series. Only 13 patients with class III malocclusion were treated with the G7 (n = 7) and G8 (n = 8) series (Table 1 ). The overall mean and standard deviation values of vertical distance (2.91±1.42), intermolar distance in the lower arch (52.68±3.15), overjet (2.71±1.06), and inter-incisal angle (138.24±12.18) were higher than the predicted model. However, the predicted model showed higher mean and standard deviation values for intercanine distances in the upper (36.94±1.57) and lower arches (28.48±1.40) and upper intermolar distances (57.21±2.91). The mean difference between predicted and achieved measures differed significantly in vertical dimension, upper intercanine and intermolar distances, overjet, and inter-incisal angle (p<0.05). All the studied variables showed a significant positive correlation between predicted and achieved measurements (p<0.05) except for the lower intermolar distance (r=-0.092, p=0.345). The highest mean percentage accuracy was observed with lower intermolar distance measurement (100.82±2.96), and the lowest percentage accuracy was found with inter-incisal angle (29.72±39.91), as shown in Table 2 . The mean±SD, correlation, and percentage accuracy of predicted and achieved variables of the G7 and G8 Invisalign series are shown in Figure 1 and Table 3 , respectively. The G7 series demonstrated higher achieved mean and standard deviation values for vertical distance (2.53±1.33), intermolar distance in the lower arch (52.40±2.77), overjet (2.83±0.95), and inter-incisal angle (135.58±10.35) than the predicted model. However, the mean difference between predicted and achieved models was significant for vertical dimension and overjet measures (p<0.05). On the other hand, the G7 series showed higher predicted values of upper intercanine distance (37.01±1.45), lower intercanine distance (28.70±1.46), and upper intermolar distance (57.01±2.38) than the achieved models. The mean difference between the predicted and achieved models was statistically significant in the upper intermolar distance. In addition, all the studied variables measured using the G7 series demonstrated a statistically significant correlation between predicted and achieved models (p<0.05). The G7 series showed the highest mean percentage accuracy of (100.68±3.80) with the lower intermolar distance, and the lowest mean percentage accuracy of (34.47±44.06) was observed with inter-incisal angle measurement. Similarly, the G8 series demonstrated higher achieved mean and standard deviation values for vertical distance (3.08±1.44), lower intermolar distance (52.81±3.32), overjet (2.65±1.11), and inter-incisal angle (139.46±12.81) than the predicted model. However, the mean difference between predicted and achieved models was significant for vertical dimension and inter-incisal angle measures (p<0.05). Contrarily, the G8 series demonstrated higher predicted than achieved mean and standard deviation values for upper intercanine (36.91±1.63), lower intercanine (28.37±1.37), and upper intermolar distance measurements (57.3±3.14). 
The mean difference between the predicted and achieved models was statistically significant in upper intercanine and upper intermolar distances (p<0.05). Moreover, all the studied variables demonstrated a statistically significant correlation between predicted and achieved models (p<0.05) except for lower intermolar and overjet measures. The G8 series showed the highest mean percentage accuracy (100.89±2.52) with the lower intermolar distance, and the lowest mean percentage accuracy (27.53±37.98) was observed with the inter-incisal angle measurement. The comparison of the mean percentage accuracy of variables measured between the G7 and G8 Invisalign series is shown in Figure 2 and Table 4 , respectively. The mean percentage accuracy (G7 vs. G8) of the intercanine distance in the lower arch (61.28±47.67 vs. 80.51±38.32), the intermolar distance in the upper arch (61.72±47.67 vs. 69.95±44.11), and the intermolar distance in the lower arch (100.68±3.80 vs. 100.89±2.52) was relatively higher in the G8 series than in the G7. The accuracy percentage was significantly higher with the G8 series than with the G7 regarding the intercanine distance in the upper arch. In contrast, the G7 series showed a higher mean percentage accuracy of vertical distance (91.11±84.83 vs. 76.76±65.45), overjet (58.44±35.17 vs. 53.71±45.87), and inter-incisal angle (34.47±44.06 vs. 27.53±37.98) than the G8 series.
Discussion The specific advantages promoted by the manufacturer for the Invisalign® G8 series were not explicitly detailed. However, given the iterative nature of the Invisalign versions, it can be assumed that the G8 series was designed with enhancements aimed at improving treatment predictability and outcome accuracy over its predecessors. With respect to validating these assumed claims, our findings indicate that both the G7 and G8 series showed statistically significant differences between predicted and achieved values for multiple variables. For the G7 series, the achieved values were higher than predicted for vertical distance, intermolar distance in the lower arch, overjet, and inter-incisal angle. The mean differences between predicted and achieved values were significant for vertical dimension and overjet measures. On the other hand, the G7 series showed higher predicted values than achieved for upper intercanine distance, lower intercanine distance, and upper intermolar distance, with a significant mean difference in the upper intermolar distance. Turning to the G8 series, the observed mean and standard deviation values for vertical distance, lower intermolar distance, overjet, and inter-incisal angle were higher than the predicted model. The predicted model showed higher mean values for the upper and lower intercanine distances and upper intermolar distances. The mean difference between predicted and achieved measures was significantly different in several areas, including vertical dimension, upper intercanine and intermolar distances, overjet, and inter-incisal angle. Although several authors have studied the effectiveness of Invisalign, the impact of the Invisalign G series on predictable outcomes and percentage accuracy still needs to be addressed. The average accuracy with clear aligners ranged from 55% to 72% [ 16 ]. In this study, we retrospectively evaluated the difference between the G7 and G8 regarding the predictable outcome of Invisalign treatment. Most patients had class I malocclusion (66.7%), 21.3% had class II malocclusion, and only 12.0% had class III malocclusion. Invisalign® recommends using sagittal correctors before beginning aligner therapy for severe class II malocclusions [ 17 , 18 ] irrespective of the need for extractions. However, G8 demonstrated slightly superior accuracy to G7 in measuring distances between canine and molar teeth. In terms of upper jaw measurements, G8 displayed approximately 81% accuracy compared to G7's 61% for intercanine distance. Conversely, G7 surpassed G8 in vertical distance accuracy, showing about 91% accuracy as opposed to G8's roughly 77%. Furthermore, G7 demonstrated superior accuracy in measurements of overjet and interincisal angle. Our retrospective study further yielded an unexpected result: the accuracy for intermolar distance in the lower jaw exceeded 100% in both G7 and G8. In accordance with our findings, Grünheid et al. [ 19 ] reported approximately 88% accuracy for lower jaw expansion. Invisalign's accuracy for upper jaw expansion was around 73%, although more tipping was observed than predicted in the digital treatment plan, a phenomenon also noted by Haouili et al. [ 20 ]. The average accuracy of maxillary arch transverse expansion was 70%, irrespective of the type of tooth, as reported by Galluccio et al. [ 21 ]. In the context of the broader research, the accuracy of predicted outcomes has been a crucial area of investigation. Alswajy et al.
[ 15 ] conducted a comprehensive study focusing on assessing the accuracy of ClinCheck® in measuring various dental dimensions, including sagittal, vertical, transverse, and arch length dimensions. Their study reported a high correlation between the predicted and achieved outcomes for the upper intercanine width, showcasing an impressive accuracy of 97.97% with a mean difference of 0.53±1.05 mm. The results of our retrospective study align with these findings, demonstrating that the Invisalign G8 series also exhibits a high accuracy in the upper intercanine measurements (80.51±38.32), albeit less than that reported by Alswajy et al. The mean difference observed between predicted and achieved outcomes in our study was 0.246±0.555 for the G8 series, slightly lower than reported by Alswajy et al [ 15 ]. Similarly, the G7 series showed a mean difference of 0.103±0.873 in the upper intercanine measurements, indicating the G-series' ability to predict outcomes with reasonable accuracy. However, there is also evidence of discrepancies between predicted and achieved outcomes. For instance, Krieger et al. [ 22 ] reported a slight negative difference (-0.13±0.59) between the achieved and predicted tooth movement in the upper intercanine distance, and a positive difference (0.13±0.59) in the lower intercanine distance. This variation underscores that while Invisalign treatments can predict outcomes with a high degree of accuracy there may still be slight variations in the final results depending on individual patient factors and treatment complexities. Taken together, these studies contribute to our understanding of the predictive accuracy of Invisalign treatments and the potential for slight variations between predicted and achieved outcomes. These findings can be instrumental for practitioners in managing patient expectations and optimizing treatment planning. As far as the literature is concerned in this regard, the field of Invisalign research is dynamic and diverse, with different studies focusing on various aspects of treatment outcomes. For instance, Charalampakis et al. [ 13 ] emphasized that changes in the distance between upper canine teeth were the most noticeable, attributing this to the fact that these teeth have the longest roots. Contrarily, our findings spotlighted the most significant difference in the lower molar distance, with a difference of -2.997±49.485 in G7 and -6.242±48.682 in G8 between the expected and actual outcomes. This divergence from Charalampakis et al.'s findings highlights the complexity of predicting outcomes in Invisalign treatment and suggests that factors other than root length may impact tooth movement. In terms of overjet accuracy, our findings echoed those of Krieger et al. [ 22 ] and Alswajy et al. [ 15 ]. In G7, the overjet was 58.44% accurate, with a difference of -0.365 between expected and actual outcomes, similar to the findings of Krieger et al. In the G8 series, the accuracy was slightly lower at 53.71%, with a mean difference of -0.185, aligning with Alswajy et al.'s findings. This suggests a relatively consistent degree of predictability across different series of Invisalign aligners. However, there are nuances to consider. Buschang et al. [ 23 ] and Tsai et al. [ 24 ] found that the actual overjet was slightly higher than predicted, indicating that the control over overjet and tooth tilt might be reduced when certain features are used. 
This underscores the need for detailed patient-specific treatment planning and highlights the importance of considering the potential influence of specific features on treatment outcomes. In terms of vertical movement (the up-down direction), the G7 series was more accurate than the G8, with accuracies of 91.11% and 76.76%, respectively. Krieger et al. [ 22 ] have reported that vertical movement with Invisalign is more difficult, resulting in larger deviations from expected outcomes. This aligns with our findings, which showed a mean difference of -0.779 between expected and actual outcomes in G7. Alswajy et al. [ 15 ] reported a similar difference in the amount of overlap of the upper and lower front teeth (overbite). However, Castroflorio et al. [ 25 ] found that the lower front teeth were more likely to move vertically than the back teeth. Ko et al. [ 17 ] also reported that vertical and side-to-side corrections may be difficult. Several studies [ 2 , 13 , 19 , 20 ] have reported that the least predictable movements are in the vertical direction, especially the movement of teeth into the jawbone (intrusion). Conversely, Bilello et al. [ 26 ] found that intrusion was quite predictable. Jaber et al. [ 27 ] reported that moving teeth out of the jawbone (extrusion) is difficult with aligners and that the aligners covering the biting surfaces of the teeth can prevent the bite from settling properly. This study found the interincisal angle to have an accuracy percentage of 34.47±44.06 and a mean difference between the predicted and achieved values of -2.109±9.471 for the G7 series. By comparison, G8 showed 27.53±37.98 percent accuracy, and the mean difference between predicted and achieved was -2.908±9.417, representing the lowest percentage of all the parameters. In contrast, Alswajy et al. [ 15 ] reported a much higher percentage accuracy of 96.23% in their study. A limitation of this study is the aberrant results for the lower intermolar distance, where accuracy exceeded 100% despite the use of various calculation formulas. The vertical distance, overjet, and inter-incisal angle measurements indicated that G7 is more accurate than G8, even though the G8 group had a larger sample size; the larger sample size, however, somewhat offsets the potential bias in these findings. Furthermore, it should be noted that we investigated only the recent generation series of Invisalign (G7, G8) to determine whether it has improved the actual outcome of aligner treatment over the projected one for each plane, not per tooth for every movement. After identifying the limits of clear aligner therapies, it seems logical to develop strategies to increase the predictability of achieving the desired results. A range of conceptual approaches to improve the efficacy and efficiency of clear aligners is provided by Bowman [ 28 ].
Conclusions This retrospective study observed that the Invisalign G-Series updates had a significant impact on improving the predicted outcomes. The mean difference between the predicted and achieved measurements showed significant variations in several variables, such as vertical dimension, upper intercanine and intermolar distances, overjet, and inter-incisal angle. However, all studied variables demonstrated a significant positive correlation between predicted and achieved measurements, except for the lower intermolar distance. When comparing the G7 and G8 Invisalign series, both demonstrated higher achieved mean values for vertical distance, intermolar distance in the lower arch, overjet, and inter-incisal angle than the predicted model. However, the G8 series showed relatively higher accuracy in the intercanine distance in the lower arch and the intermolar distances in both the upper and lower arches compared with the G7 series. The G7 series, on the other hand, demonstrated a higher mean percentage accuracy for vertical distance, overjet, and inter-incisal angle than the G8 series. In conclusion, the Invisalign G-Series updates were found to be effective in improving predicted outcomes, with some variations between the G7 and G8 series. This information may be beneficial to orthodontists, since the G8 series outperforms the G7 series for specific orthodontic measurements and treatment outcomes. However, it is important to consider that while the series of updates shows promising results, individual patient factors and treatment complexities can influence the final outcomes.
Background: Understanding the real-world implications of periodic changes to orthodontic appliances can provide valuable insights for future treatment strategies and patient outcomes. This study aimed to investigate the impact of the latest updates added to the G7 and G8 Invisalign series on actual versus predicted outcomes and the percentage accuracy of the treatment. Method: This retrospective study was conducted in private orthodontic practices in Riyadh, Saudi Arabia. Orthodontists carried out Invisalign® treatment using the latest updates added to the G7 and G8 Invisalign series. The study group comprised patients with different malocclusion types who received non-extraction Invisalign treatment. The Invisalign treatment plan was provided by the ClinChecks program (Invisalign, San Jose, United States) for patients treated throughout the years (2016-2022). Different dimensions were assessed to record predicted and actual treatment outcomes with the aid of iTero® (Align Technology, San Jose, United States) and ClinCheck® (Invisalign, San Jose, United States). The percentage accuracy was determined using the formula (100%-((Predicted-Actual)/Predicted) *100%). Results: A total of 108 patients (male = 34 (31.5%) and female = 74 (68.5%)) treated with Invisalign G7 and G8 series were considered in this study. The overall mean and standard deviation values of vertical distance (2.91±1.42), intermolar distance in the lower arch (52.68±3.15), overjet (2.71±1.06), and inter-incisal angle (138.24±12.18) were higher than the predicted model. However, the predicted model showed higher mean and standard deviation values for intercanine distances in the upper (36.94±1.57) and lower arches (28.48±1.40) and upper intermolar distances (57.21±2.91). The G7 versus G8 intercanine distance in lower (61.28±47.67 vs. 80.51±38.32), intermolar distance in upper (61.72±47.67 vs. 69.95±44.11), and intermolar distance in lower (100.68±3.80 vs. 100.89±2.52) were relatively higher in the G8 series than the G7. The accuracy percentage was higher with the G8 series than with the G7 regarding the intercanine distance in the upper arch. In contrast, the G7 series showed a higher mean percentage accuracy of vertical distance (91.11±84.83 vs. 76.76±65.45), overjet (58.44±35.17 vs. 53.71±45.87), and inter-incisal angle (34.47±44.06 vs. 27.53±37.98) than the G8 series. Conclusion: The percentage accuracy of aligner therapy administered using the Invisalign G7 and G8 series demonstrated no significant variation.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50615
oa_package/27/fa/PMC10788698.tar.gz
PMC10788699
38226119
Introduction The terms "overweight" and "obesity" refer to abnormal or excessive weight gain that poses a risk to health. Overweight is defined as a body mass index (BMI) of 25 kg/m2 or greater, and obesity as a BMI of 30 kg/m2 or greater. According to the WHO, the global rate of obesity has nearly tripled since 1975. According to a recent study, the prevalence of obesity was 35.6% in Saudi Arabia [ 1 ]. A BMI of 40 kg/m2 with or without an obesity-related comorbidity, or 35 kg/m2 in the presence of an obesity-associated illness, is stated as an indication for bariatric surgery in both the Saudi Clinical Practice Guidelines for the Management of Obesity and the Saudi Arabian Society for Metabolic and Bariatric Surgery, which are the same indications as in the United States [ 2 , 3 ]. Bariatric surgery is a procedure used to assist obese patients in losing weight. Bariatric surgery comes in various forms, and each one alters how the digestive system functions. Some varieties reduce the capacity of the stomach, which causes a person to feel full more quickly and eat less food [ 4 ]. Since 2014, bariatric surgery has remained the most frequently performed elective procedure [ 5 ]. In 2019, there were approximately 27,000 bariatric procedures in Saudi Arabia [ 6 ]. This procedure is quite helpful in treating many other conditions in addition to obesity, including diabetes, high blood pressure, sleep apnea, and high cholesterol [ 7 ]. However, with this rapid weight loss, most patients experience loosening of the skin, which is usually unwanted. Regardless of their age or gender, most post-bariatric patients report having excess skin in various body areas, and most experience physical, functional, and emotional limitations caused by it. As a result, abdominal contouring surgery is the most requested procedure after bariatric surgery [ 8 ]. Body contouring surgeries aim to remove the excess sagging skin and fat, resulting in an overall better shape and general appearance. Some of the body contouring procedures are body lift and abdominoplasty, which involve removing the extra skin that hangs over the abdomen and are the most performed contouring procedures. Others include arm lift, breast lift, and thigh lift, which follow the same principle [ 9 ]. A study regarding patients' long-term satisfaction after plastic surgery following gastric bypass showed that a greater number of patients had a much better quality of life in almost all subdomains: self-esteem, social life, workability, sexual activity, and physical activity [ 10 ]. Another qualitative study done in the USA also concluded the importance of body contouring surgery in enhancing physical, psychological, and social well-being [ 11 ]. Despite all of these benefits, only a few people undergo contouring surgery after bariatric surgery. According to a study done in Saudi Arabia, which involved 128 patients who underwent bariatric surgery, 78.1% of patients expressed a desire for body contouring surgery, yet only 18 patients (14%) had undergone it [ 12 ]. Another study done on adolescents showed that only 25 (12.6%) of the 198 study participants for whom body contouring surgery (BCS) information was available received 41 body contouring surgeries following bariatric surgery [ 13 ]. As a result, there is a need for a study to be conducted on the barriers that could prevent people who underwent bariatric surgery from getting body contouring surgery.
A cross-sectional study was therefore conducted to explore the factors that determine patients' decisions about whether to have body contouring surgery, in relation to their socioeconomic characteristics and psychological aspects.
Materials and methods Study methodology A cross-sectional study was conducted over a period of six months after approval from the ethical committee. The study was conducted among a population of females and males aged 16 years and above. The data were collected using a purposive sampling technique. Study population This study was carried out among the female and male population who had undergone bariatric surgery in Saudi Arabia. Sample size The minimum required sample was 300 participants in Saudi Arabia. It was calculated using the formula (n = z^2 * P * (1 - P) / e^2), where n represents the sample size, z is the z-value corresponding to a 95% confidence level, P is the expected true proportion (0.5), and e is the margin of error (0.05). Sample technique A simple random sample was used. Inclusion criteria Females and males who had undergone bariatric surgery and were aged 16 and above were included. Subjects were required to be residents of Saudi Arabia. Exclusion criteria Individuals below the age of 16 and those who were not residents of Saudi Arabia were excluded. Study procedure Participants who fulfilled the inclusion and exclusion criteria and provided consent were enrolled. Each subject anonymously filled out the questionnaire. The results were statistically analyzed using Statistical Product and Service Solutions (SPSS, version 29) (IBM SPSS Statistics for Windows, Armonk, NY). The study timeline was six months from obtaining the necessary approvals. The collected data were then interpreted, and potential solutions were proposed when applicable. Data management The data were stored in a secure location, and only approved personnel had access to the data. Privacy and confidentiality were prioritized, and the study ensured that participants' identities remained anonymous. The data were analyzed using the IBM SPSS program and interpreted by the investigator. Statistical analysis Data analysis was performed using SPSS, version 29. Categorical variables were presented using frequency and percentages. Numerical variables were summarized using minimum, maximum, mean, and standard deviation. The chi-square test was used to compare variables. The significance level was set at 0.05.
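As an illustration of the two calculations described above, the sketch below applies the standard single-proportion sample-size formula (with the 1 − P term as written above) and a chi-square test of independence in Python with SciPy; the study itself used SPSS. With the conventional inputs z = 1.96, P = 0.5, and e = 0.05 the formula gives roughly 385. The 2×2 contingency table is reconstructed from the counts reported in the Results under the assumption that the 213 participants who considered surgery are drawn from the full cohort of 662; it is shown here only for illustration.

```python
import math
from scipy.stats import chi2_contingency

# Single-proportion sample-size formula: n = z^2 * P * (1 - P) / e^2
z, P, e = 1.96, 0.5, 0.05          # 95% confidence, expected proportion, margin of error
n = (z ** 2) * P * (1 - P) / (e ** 2)
print("required sample size ~", math.ceil(n))   # ~385 with these conventional inputs

# Chi-square test comparing sex against consideration of body-contouring surgery.
# Counts reconstructed from the reported results (143/386 females, 70/276 males considering).
table = [[143, 386 - 143],   # female: considered / did not consider
         [70, 276 - 70]]     # male:   considered / did not consider
chi2, p_value, dof, expected = chi2_contingency(table)
print("chi2=%.2f, p=%.4f" % (chi2, p_value))    # close to the reported p = 0.003
```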
Results Our study included 662 participants in Saudi Arabia considering body-contouring surgery after bariatric surgery. Table 1 shows that the majority were female (386, 58.3%), aged 19-29 (44.3%), overweight (195, 29.5%), and Saudi nationals (606, 91.5%). About 50% were married, and 71.8% had a university-level education. Nearly half of the participants were employed (329, 49.7%), and a significant portion had a monthly income below 2500 SAR (262, 39.6%). Table 2 shows the features of participants related to bariatric surgery. Most had undergone bariatric surgery (558, 84.3%), with a significant weight reduction from 121.1 to 84.7 kg. Gastric sleeve was the most common procedure (485, 73.3%). A substantial number were satisfied with the surgery results (293, 44.3% strongly agree), and many found the experience attractive (220, 33.2%). However, a notable portion suffered from excess or saggy skin (311, 47.0%). Table 3 presents difficulties faced by participants due to saggy skin after bariatric surgery. Approximately 18.9% of participants reported experiencing "rashes/itching due to excess skin" several days a week. A significant 17.5% encountered challenges in "running/walking quickly because of excess skin" for several days each week. Emotional difficulties such as "lack of interest/pleasure in activities" (133, 20.1%) and "feeling frustrated/depressed/hopeless" (123, 18.6%) were prevalent over multiple days a week. Many participants faced issues with clothing choices, with "difficulty finding suitable clothes due to excess skin" reported by 19.5%. Additionally, a noteworthy proportion (102, 15.4%) experienced hindrances in their daily life due to excess skin on several days a week. Table 4 shows insights into participants who underwent body contouring or body lift surgery after bariatric surgery. Among the 662 participants, 8.3% had body contouring surgery. The types of surgeries included body lift (13, 23.6%), liposuction (19, 34.5%), and tummy tuck (26, 47.2%). Most procedures occurred more than one year after bariatric surgery (31, 56.4%). A significant portion considered additional procedures (31, 56.4%), while 43.6% did not. Figure 1 shows the factors influencing the decision to undergo contouring surgery. The most significant factors were a lack of self-confidence (18.6%), body dissatisfaction (14.2%), and discomfort with excess skin (11.1%). A reasonable cost (9.9%), concerns about oily skin (7.7%), and family members' experiences (5.5%) also played roles in the decision-making process. Table 5 shows the factors and barriers related to body-contouring surgery in patients who previously underwent bariatric surgery. Among 213 participants, 32.2% considered body-contouring surgery to improve physical appearance. The timing for considering surgery varied, with 52.5% contemplating it less than a year after bariatric surgery. Barriers included cost (171, 80.2%), fear of a second surgery (97, 45.6%), concerns about side effects (90, 42.2%), social factors (69, 32.3%), and busy surgery schedules (32, 15.0%). A notable portion (58, 27.2%) attempted to reduce costs by visiting other surgeons. Table 6 shows the sociodemographic factors and their significance in relation to the consideration of body-contouring surgery among patients who previously underwent bariatric surgery. Notably, females (143, 67.1%) were more likely than males (70, 32.9%) to consider it, with a significant p-value of 0.003.
Similarly, nationality played a significant role, with Saudis (182, 85.4%) more inclined than non-Saudis (31, 14.6%) to consider surgery (p-value < 0.001). Occupation also showed significance, as employed (105, 49.3%) were more likely to consider surgery than others (p = 0.038).
Discussion Obesity rates have tripled globally since 1975, with a 35.6% prevalence in Saudi Arabia. Bariatric surgery, a common treatment, often leads to excess skin. Despite its benefits, few patients choose body-contouring surgery. Our study aims to identify barriers, including socioeconomic and psychological factors. Our findings on body-contouring surgery barriers after bariatric surgery offer valuable insights; we discuss the results in relation to existing literature and their implications for clinical practice and future research. Our study found a predominantly female (58.3%) participant group, which is in line with previous research showing women's higher interest in post-weight-loss body contouring; women have reported significantly more problems, discomfort, and excess skin (p < 0.05) than men [ 14 ]. Most participants (44.3%) fell within the 19-29 age group, consistent with the trend of bariatric surgeries being common among younger adults [ 15 ]. The high representation of Saudi nationals (91.5%) aligns with the country's demographic profile. Females were notably more inclined to consider body-contouring surgery (67.1%) than males (32.9%), aligning with prior research highlighting a higher prevalence of such procedures among women. This underscores the well-established link between gender, body image, and the desire for body-contouring surgery, as shown by Giordano et al., in whose study 69.4% of females expressed a desire to undergo post-bariatric body-contouring surgery [ 16 ]. Non-Saudi nationals (14.6%) were less inclined to consider body-contouring surgery than Saudi nationals (85.4%, p < 0.001). Cultural norms, healthcare access, and socioeconomic status likely contribute to this distinction. Occupation influenced the consideration of body-contouring surgery, with employees, students, and housewives showing higher interest, possibly due to flexible schedules. This underscores the importance of considering diverse patient motivations. Our study revealed that 84.3% of participants had undergone bariatric surgery, consistent with the high volume of these procedures performed in Saudi Arabia in response to an obesity prevalence reported to reach 41% in men and 78% in women by 2022 [ 17 ]. The substantial weight reduction, from 121.1 kg to 84.7 kg, highlights bariatric surgery's effectiveness in achieving significant weight loss, consistent with prior research on various bariatric procedures [ 18 ]. Gastric sleeve, representing 73.3% of bariatric procedures in our sample, reflects the procedure's global popularity owing to its effectiveness and lower complication rate [ 19 ]. Participant satisfaction (44.3% strongly agree) underscores bariatric surgery's positive impact, improving weight, health, and overall quality of life, consistent with prior research in which 70-90% of patients are generally satisfied with bariatric surgeries [ 20 ]. A significant portion (47.0%) of participants experienced excess or saggy skin after weight loss, a common issue due to the skin's limited ability to adapt to the reduced body size [ 21 ]. Excess skin causes physical discomfort, skin-related problems, and emotional distress, motivating consideration of body-contouring surgery [ 11 ]. Several key factors identified by our study influence the decision to undergo body-contouring surgery. Notably, a lack of self-confidence was the most significant motivator (18.6%), followed by body dissatisfaction (14.2%) and discomfort with excess skin (11.1%). These findings align with prior research emphasizing body image concerns as primary drivers for body-contouring surgery [ 22 ].
Cost emerged as a significant factor for some participants (9.9%), underscoring its impact as a barrier to accessing these procedures. While not ranking as high as body image-related factors, it may be a limiting factor [ 23 ]. Thus, addressing financial considerations is crucial to enhance accessibility. The influence of family members' experiences (5.5%) suggests that social support and shared family dynamics may shape individuals' choices in pursuing body-contouring surgery. The results align with other research highlighting social networks' impact on healthcare decisions [ 24 ]. There are several barriers to body-contouring surgery in post-bariatric surgery patients. Cost was a prominent hindrance, with 80.2% citing it as a deterrent, aligning with prior research highlighting cost as a significant barrier. Concerns about a second surgery (45.6%) and potential side effects (42.2%) were substantial barriers, possibly due to the complexity and associated risks. Societal judgments about cosmetic surgery (32.3%) contributed to hesitancy, emphasizing the importance of reducing stigma. Additionally, busy surgical schedules (15.0%) underscored the need for improved access and efficient scheduling [ 25 ]. Limitations This study has some limitations, including its cross-sectional design, which limits the ability to establish causation. Additionally, the study was conducted in a specific region (Saudi Arabia), and the findings may not be generalizable to other populations. Further research with larger and more diverse samples is needed to validate these findings and explore additional factors that may influence the decision-making process for body-contouring surgery.
Conclusions Our study sheds light on the considerations, barriers, and sociodemographic factors associated with body contouring surgery among individuals who have previously undergone bariatric surgery in Saudi Arabia. The findings underscore the multifaceted nature of these decisions and the need for a patient-centered approach to care. Addressing barriers and providing comprehensive support can help individuals make informed choices regarding body contouring surgery and improve their overall well-being.
Introduction The prevalence of obesity has increased significantly worldwide in recent years, emerging as a prominent concern in many countries, including Saudi Arabia. Bariatric surgery, a common treatment, often leads to excess skin. Despite its benefits, few patients choose body contouring surgery. This cross-sectional study aims to identify barriers, including socioeconomic and psychological factors. Methodology This is a cross-sectional study conducted in Saudi Arabia. Participants included those who underwent bariatric surgery. Data were collected through questionnaires and analyzed with Statistical Product and Service Solutions (SPSS, version 29) (IBM SPSS Statistics for Windows, Armonk, NY). Results Our study involved 662 participants in Saudi Arabia, primarily females (386, 58.3%) and those aged 19-29 (44.3%). Most had undergone bariatric surgery (558, 84.3%), mainly gastric sleeve (485, 73.3%). Excess skin was a common issue (311, 47.0%). Difficulties included rashes and emotional distress (e.g., depression). About 8.3% had body-contouring surgery, including body lifts (13, 23.6%) and liposuction (19, 34.5%). Factors influencing surgery decisions included a lack of self-confidence (123, 18.6%) and cost (9.9%). Barriers for the 32.2% considering surgery included cost (80.2%) and fear of a second surgery (45.6%). Females (67.1%), Saudis (85.4%), and employed individuals (49.3%) were more likely to consider surgery (p < 0.05). Conclusion Our study highlights the complexity of body-contouring decisions after bariatric surgery in Saudi Arabia. Cost and fear were barriers; females, Saudis, and employed individuals were more likely to consider surgery. A patient-centered approach, addressing barriers, and offering support are crucial for informed choices and improved well-being.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50558
oa_package/54/e1/PMC10788699.tar.gz
PMC10788700
38226121
Introduction Acute abdominal pain in pediatric patients presents a diagnostic challenge due to the diverse etiologies that range from benign to potentially life-threatening conditions. Omental cysts, though infrequent, represent a distinct category of abdominal pathology characterized by a fluid-filled sac originating from the greater omentum [ 1 ]. While these cysts are generally benign, their occurrence in children, particularly when accompanied by torsion of the greater omentum, is a rare clinical occurrence. There is limited literature on pediatric cases involving omental cysts with torsion [ 2 - 5 ]. Despite the usual lack of symptoms in the majority of omental cysts, torsion of the greater omentum can result in acute abdominal pain, emphasizing the need for timely recognition and intervention [ 3 ]. This case report details an instance of omental cyst torsion in a pediatric patient presenting with acute abdominal pain.
Discussion Omental cysts are cystic lesions arising from the omentum, a double-layered fold of the peritoneum that hangs down from the stomach and covers the abdominal organs [ 4 ]. While the prevalence rate is not explicitly discussed in this context, it is noteworthy that omental cysts are considered rare entities in the medical literature. The etiology and pathogenesis of omental cysts, though not definitively elucidated, involve multifaceted mechanisms. Proposed hypotheses include developmental anomalies, suggesting abnormal embryological processes contribute to cystic degeneration later in life, and trauma, either acute or repetitive, triggering cystic changes in the omentum. Inflammatory processes, such as infections or chronic irritation, have also been implicated. The diversity of histopathological subtypes, including lymphatic malformations, mesothelial cysts, and pseudocysts, adds complexity to understanding omental cyst formation [ 1 , 2 ]. The clinical presentation of omental cysts encompasses a spectrum of manifestations, with abdominal pain emerging as the predominant symptom. Patients commonly report localized dull or colicky pain, often associated with gradual or sudden onset. Abdominal distension may accompany cyst enlargement, resulting in a palpable mass upon examination. Gastrointestinal symptoms such as nausea, vomiting, and alterations in bowel habits can arise, attributed to mechanical pressure on the gastrointestinal tract [ 6 , 7 ]. Notably, incidental findings through imaging studies underscore the importance of considering omental cysts in asymptomatic cases [ 3 ]. Rarely, complications like torsion, bleeding, or rupture can lead to an acute abdomen. Accurate diagnosis of omental cysts is imperative for guiding appropriate management strategies. Ultrasound serves as a valuable initial screening tool, offering real-time imaging and aiding in the identification of cystic structures within the omental region. In the evaluation of intra-abdominal cystic lesions, sonography plays a pivotal role as a non-invasive and readily available imaging tool. It provides real-time visualization, aiding in distinguishing fluid-filled structures from solid masses and offering insights into the characteristics of the cyst [ 2 - 5 ]. Computed tomography scans provide detailed cross-sectional images, facilitating precise localization, characterization, and assessment of the cyst's relationship with adjacent structures. Fine-needle aspiration may be utilized in certain cases to obtain fluid for cytological analysis, confirming the cystic nature and excluding other pathological entities [ 1 , 6 ]. In considering the differential diagnosis of intra-abdominal cystic lesions, it is imperative to explore a range of potential etiologies to ensure accurate and comprehensive clinical evaluation. Cysts within the abdominal cavity may arise from various structures, including the liver, pancreas, kidneys, and reproductive organs. Common differential considerations encompass cystic neoplasms, such as pancreatic cystic neoplasms or liver cystadenomas, as well as congenital cystic lesions like choledochal cysts or renal cysts. Additionally, infectious etiologies, such as abscesses or hydatid cysts, and inflammatory conditions like pseudocysts, should be considered [ 3 - 5 ]. The optimal management of omental cysts involves an individualized approach, considering factors such as the size and location of the cyst, associated symptoms, and the overall health status of the patient [ 3 ]. 
Surgical excision stands as the primary and most widely accepted intervention, particularly for symptomatic or large cysts. The surgical approach may vary from laparoscopic to open methods, depending on the complexity of the cyst and the surgeon's expertise [ 5 ]. However, a conservative approach, including observation and periodic imaging, may be considered for asymptomatic or small cysts, especially in cases where surgery poses a higher risk. Percutaneous drainage, guided by imaging, is another alternative, particularly in cysts with clear fluid content [ 3 ]. In our case, emergent surgery was essential as the patient had an omental cyst with omental torsion.
Conclusions This case report underscores the diagnostic and therapeutic challenges posed by omental cysts with torsion in pediatric patients, emphasizing the significance of prompt recognition and intervention. The rare prevalence of such cases underscores the importance of considering diverse etiologies in the assessment of acute abdominal pain in children. The detailed presentation of the patient's history, clinical examination, laboratory results, and imaging findings contributes to a comprehensive understanding of this uncommon clinical entity. The successful intraoperative correction of torsion and excision of the omental cyst, guided by careful preoperative assessment, exemplify the effective management of this condition.
Acute abdominal pain in pediatric patients poses a diagnostic challenge due to diverse etiologies, ranging from benign to life-threatening conditions. Omental cysts, though rare, constitute a distinctive subset characterized by a fluid-filled sac arising from the greater omentum. We present the case of a three-year-old male who presented with severe abdominal pain localized to the right upper quadrant, progressively worsening over 24 hours. Physical examination revealed tenderness and a palpable mass. Laboratory investigations indicated mild leukocytosis. Contrast-enhanced computed tomography identified an omental cyst with torsion. Intraoperatively, the cyst arising from the greater omentum exhibited torsion, leading to ischemic changes. Surgical excision successfully corrected the torsion and removed the cyst. Omental torsion is a rare complication of omental cysts. Prompt recognition and surgical intervention are crucial, emphasizing the importance of considering diverse etiologies in acute pediatric abdominal pain.
Case presentation A three-year-old male child presented to the Pediatric Emergency Department with a chief complaint of severe abdominal pain. The parents reported that the pain had progressively worsened over the past 24 hours and was localized to the right upper quadrant of the abdomen. There was no history of trauma or significant illness preceding the onset of symptoms. The child had a normal birth history, characterized by an uneventful pregnancy, full-term delivery, and no notable complications during the perinatal period. Developmental milestones were age-appropriate, with the child achieving expected milestones such as rolling over, sitting, crawling, and walking at appropriate ages. There were no significant delays or concerns noted in the child's developmental progression. Upon physical examination, the patient appeared distressed and was guarding his abdomen. Vital signs were within normal limits for age. Specifically, the heart rate was 110 beats per minute, the respiratory rate was 20 breaths per minute, the blood pressure was 90/60 mmHg, and the temperature was 37.0 degrees Celsius. The child's weight at the time of presentation was 15 kg. Abdominal examination revealed localized tenderness and a palpable mass in the right upper quadrant. The rest of the systemic examination was unremarkable. Given the acuity and intensity of the abdominal pain, coupled with the physical findings, an urgent work-up was initiated. Laboratory investigations were notable for a mild leukocytosis (white blood cell count of 12,000/mm3) with a left shift. Serum electrolytes, liver function tests, and amylase levels were within normal ranges (Table 1 ). To further delineate the abdominal mass and assess for any underlying pathology, a contrast-enhanced computed tomography scan was performed. The computed tomography scan revealed a well-defined oval-shaped cystic lesion in the mid-abdomen with no internal septation, calcification, or soft tissue component. The cyst measured 8.2 x 4.7 x 3.4 cm on maximum dimensions. Notably, adjacent to the cystic mass, there was a fat density lesion that exhibited streaks of whirling and concentric patterns, which were associated with the omental cyst and torsion (Figure 1 ). These radiological findings were highly indicative of an omental cyst with torsion, underscoring the vascular compromise associated with the torsion that contributed to the acute abdominal pain experienced by the patient. The absence of concerning features such as septation or calcification helped further establish the diagnosis. This imaging supported the decision for surgical exploration and subsequent excision of the omental cyst to alleviate the torsion and prevent potential complications. Intraoperatively, a cystic mass arising from the greater omentum was identified and confirmed. Torsion of the omental pedicle was evident, leading to compromised blood supply and ischemic changes within the omental tissue. The cyst was carefully dissected, and the torsion was corrected. Postoperative histopathological examination confirmed the diagnosis of an omental cyst with evidence of hemorrhagic infarction (Figure 2 ). The postoperative course was uneventful, with the child gradually recovering from the abdominal pain. Intravenous antibiotics were administered postoperatively to prevent any potential infection. The patient was closely monitored for signs of complications, and oral intake was reintroduced once bowel function returned to normal.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50606
oa_package/4e/fa/PMC10788700.tar.gz
PMC10788701
38226131
Introduction and background Reversible posterior leukoencephalopathy syndrome (RPLS) was first identified by Hinchey et al. in 1996 [ 1 ]. To emphasize the shared involvement of gray and white matter in RPLS, Casey et al. coined the name posterior reversible encephalopathy syndrome (PRES) in 2000 [ 2 ]. PRES is a rapidly developing neurologic condition with distinctive clinical and radiological characteristics [ 3 ]. Reversible white matter edema that primarily affects the posterior cerebral hemispheres is a hallmark of PRES [ 4 ]. Headache, seizures, visual problems, decreased mental function, and nausea are typical manifestations, with headache and seizures being the most frequent symptoms [ 5 , 6 ]. The major imaging finding during the acute stage is vasogenic edema in the subcortical parietal-occipital white matter [ 7 ]. It has also been reported that PRES affects other parts of the brain, including the brain stem, cerebellum, basal ganglia, and frontal lobes [ 1 ]. It can be caused by infections, immunosuppression, transplantation, connective tissue abnormalities, uremia, and hypertensive crises [ 2 , 8 ]. To capture both typical and atypical cases of PRES, Fugate et al. proposed the following criteria for the diagnosis of PRES: one or more of the acute neurological symptoms described above; one or more risk factors such as severe hypertension or blood pressure fluctuations, renal failure, immunosuppressant therapy or chemotherapy, eclampsia, or autoimmune disorder; and, finally, brain imaging that may show bilateral vasogenic edema or cytotoxic edema in a pattern typical of PRES, or may even be normal [ 9 ]. While the precise pathophysiological process behind PRES is unknown, one leading hypothesis postulates that quickly developing hypertension can cause the blood-brain barrier to break down through hyperperfusion, which occurs when the cerebral blood flow autoregulation mechanism mounts an insufficient response [ 9 ]. Systemic lupus erythematosus (SLE) is a chronic autoimmune disease with clinical features ranging from mild skin rash to severe organ damage [ 10 ]. It is a multisystem disease and can affect the joints, brain, lungs, kidneys, and blood vessels of the patient [ 11 ]. SLE most commonly affects females, in particular females of childbearing age [ 10 ]. When treating SLE, hydroxychloroquine is the first-line drug, with glucocorticoids used to address flare-ups of the disease [ 10 ]. Higher doses of methylprednisolone are sometimes used when there is a significant risk of organ damage [ 12 ]. Immunosuppressants are recommended if the patient does not respond to the initial line of treatment or cannot take glucocorticoids within the recommended daily range for long-term use [ 13 ]. Antineutrophil cytoplasmic antibody-related vasculitis, psoriatic arthritis, systemic sclerosis, SLE with nephritis, and SLE without nephritis were among the rheumatic conditions linked with PRES (odds ratios (OR): 9.31, 4.61, 6.62, 7.53, and 2.38, respectively) [ 14 ]. The neurological system is affected by SLE in 12% to 95% of patients [ 15 ]. SLE-PRES patients frequently experience a significant rise in blood pressure, renal failure, and humoral retention, particularly when high doses of methylprednisolone or immunosuppressants are used for treatment. Some researchers have thus hypothesized that the interplay of the aforementioned elements is the pathogenic mechanism of SLE-PRES.
Autoimmune inflammation or ischemia alterations brought on by SLE (such as vasculitis, thrombosis, embolism, and vasospasm) could also result in PRES. Other researchers have suggested that rather than being immediately brought on by the underlying lesion of SLE, PRES should be seen as a subsequent consequence of SLE during treatment [ 16 ]. PRES is more common in lupus patients with poorly managed blood pressure, renal illness, or those on immunosuppressive medication [ 2 ]. In this systematic review, we aim to explore the relationship between hypertension and its possible role in the development of PRES in SLE patients.
Conclusions An unrecognized neuropsychiatric manifestation in SLE patients is PRES. It is difficult to diagnose and treat PRES since it might be a symptom of active lupus illness or a side effect of immunosuppressant therapy, obscuring the particular role of SLE itself in developing PRES. However, we discovered a common factor among all of the patients we were able to recover from the data we reviewed: hypertension. All patients who presented with PRES had elevated blood pressure findings, albeit it was unclear if this was their first episode of elevated blood pressure or if they were previously taking antihypertensive medication and had adhered to it. However, certain SLE individuals may present with conditions that mirror how PRES presents; as a result, it's critical for clinicians to make the correct diagnosis by understanding the clinical aspects and neuroimaging findings of PRES for quick recognition of the condition and subsequent therapy. Aggressive treatment with antihypertensives and other medications should be initiated as soon as PRES is diagnosed. Clinicians should always keep PRES as a differential in neuropsychiatric lupus patients and be aware of its associations since prompt detection and treatment of PRES are crucial in preventing ischemia/infarction and long-term neurological impairments. For future research, it would be good to have clinical studies to further refine the best treatment option for SLE patients presenting with PRES.
Posterior reversible encephalopathy syndrome (PRES), also known as reversible posterior leukoencephalopathy syndrome (RPLS), is a rare disorder that most commonly affects the posterior part of the brain. Two common causes of PRES are hypertension and autoimmune diseases such as systemic lupus erythematosus (SLE). This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 recommendations and aimed at finding the association between hypertension and PRES in SLE patients. We searched medical databases such as PubMed, PubMed Central (PMC), Cochrane Library, and Multidisciplinary Digital Publishing Institute (MDPI) for relevant medical literature. The identified papers were screened, subjected to inclusion and exclusion criteria, and ran through quality appraisal tools, after which 16 papers were finalized. The finalized papers explored the roles of hypertension in SLE patients diagnosed with PRES. In this review, we identified a link between hypertension and PRES-SLE patients. We aimed to explain the role of hypertension in the development of PRES in SLE patients. This study also explains the different treatment modalities to be used for treating the patients presenting with PRES and differentiates other neuropsychiatric illnesses commonly present in SLE patients from PRES. It's important to make an accurate clinical diagnosis by understanding the clinical features and neuroimaging results of PRES for future care since it may even be incurable in some circumstances.
Review Methodology This systematic review was conducted using the Preferred Reporting Items for Systemic Review and Meta-Analyses (PRISMA) 2020 guidelines [ 17 ]. Search Sources and Strategy We searched PubMed, PubMed Central (PMC), Multidisciplinary Digital Publishing Institute (MDPI), and Cochrane Library to search for the relevant literature. We used various combinations of SLE, PRES, and hypertension keywords to search all databases. We also used a MeSH strategy to query PubMed for relevant literature: (("Lupus Erythematosus, Systemic"[Mesh]) AND ("Hypertension"[Mesh])) AND ("Posterior Leukoencephalopathy Syndrome"[Mesh]). Table 1 shows the databases used and the identified numbers of papers for each database. Inclusion and Exclusion Criteria Only papers written in English or those with a full-text English translation were included in our selection of articles. We only included research publications with human subjects. In cases where the complete text of the papers could not be retrieved, articles were excluded. Gray literature as well as those that included pregnant people or age groups younger than 14 years were excluded. Selection Process The selected articles were relocated to Endnote (Clarivate Plc, Philadelphia, United States, London, United Kingdom), and any duplicate papers were eliminated. Each article was reviewed by looking at the titles and abstracts. Any disagreements about eligibility were discussed and resolved by general agreement. Only pertinent articles were reviewed when the shortlisted articles were given a full-text evaluation. Shortlisted articles were the only ones that met the inclusion and exclusion criteria. Quality Assessment of the Studies Using the appropriate quality assessment techniques, the papers that made the shortlist were evaluated for quality. The Newcastle-Ottawa method was used to rate the quality of observational studies, while the Assessment of Multiple Systematic Review (AMSTAR) tools were used to rate the quality of systematic reviews. For narrative reviews, the Scale for the Assessment of Narrative Review (SANRA) was used. The Joanna Briggs Institute (JBI) checklist was utilized to examine case reports. In this systematic review, only studies that met the quality appraisal criteria were considered. Data Collection Process After the articles were finalized for the systematic review and extracted, the primary outcomes were assessed along with other necessary information. Results Study Identification and Selection We identified a total of 197 relevant articles using all databases. In total, 67 duplicate articles were removed before screening them in detail. After screening these articles by reviewing titles and abstracts and retrieving full texts, 30 articles were shortlisted. The shortlisted full-text articles were assessed for eligibility and quality, and 16 were finalized for review. The selection process of the studies is shown in Figure 1 in the PRISMA flowchart. The articles were assessed for eligibility using the Newcastle Ottawa tool. Table 2 below shows the results of the quality appraisal. Case reports were finalized using the JBI quality check tool, and narrative reviews using the SANRA checklist. Outcomes Measured The primary outcome extracted from the finalized research papers was the association of hypertension with PRES in SLE patients. Other outcomes assessed were other risk factors of PRES and treatment of PRES in SLE patients. A few studies explored differential diagnoses of neuropsychiatric symptoms in SLE patients. 
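For readers who wish to re-run the literature search, the snippet below shows one hypothetical way of submitting the MeSH query quoted in the methodology above to PubMed programmatically with Biopython's Entrez module. The review itself was performed through the database web interfaces; the email address and the retmax cap are placeholders.

```python
from Bio import Entrez

# NCBI requests a contact email with E-utilities queries (placeholder here).
Entrez.email = "reviewer@example.com"

# MeSH search strategy quoted in the methods section.
query = ('(("Lupus Erythematosus, Systemic"[Mesh]) AND ("Hypertension"[Mesh]))'
         ' AND ("Posterior Leukoencephalopathy Syndrome"[Mesh])')

# Retrieve matching PubMed IDs (retmax is an arbitrary cap for this sketch).
handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print("records found:", record["Count"])
print("first PMIDs:", record["IdList"][:10])
```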
Study Characteristics Table 3 includes a summary of the included observational studies. Table 4 contains a summary of the included case-cohort studies. Table 5 contains a summary of the included meta-analysis studies. Table 6 contains a summary of the included narrative reviews. A few case reports on PRES-SLE were also reviewed to find their association with hypertension. Table 7 displays the summary of the included case reports. Discussion Pathophysiology, Causes, Clinical Features, and Investigation Findings of PRES The precise pathophysiological process behind PRES is still unknown. Currently, three hypotheses have been put forth, including (i) cerebral vasoconstriction with subsequent brain infarcts, (ii) cerebral autoregulation failure with ensuing vasogenic edema, and (iii) endothelial damage with disruption of the blood-brain barrier causing fluid and protein transudation in the brain [ 20 ]. Various experimental studies, neuroimaging findings, and post-mortem analyses support the latter two ideas. Byrom's experiment from the 1950s showed that a sudden rise in arterial blood pressure in rats led to functional vascular alterations that temporarily enlarged the posterior region of their brains. After the blood pressure returned to normal, the edema completely disappeared. The brain's vasculature automatically regulates to maintain a constant cerebral perfusion pressure (CPP) in response to abrupt increases in mean arterial pressure (MAP). This autoregulation is mostly accomplished through sympathetic nervous system-mediated compensatory cerebral vasoconstriction. A sudden rise in MAP that exceeds the autoregulatory ability of the cerebral vasculature might cause dilatation of the arterioles because the vertebrobasilar vasculature has somewhat less sympathetic innervation than the internal carotid artery system. Following arteriolar dilation, extravasation of plasma, cells, and protein leads to posterior cerebral edema, possibly compounded by additive endothelial harm from uremia and cytotoxic drugs [ 24 ]. Acute-onset headache, vomiting, seizures, abnormalities in visual perception, and alterations in the parieto-occipital white matter on MRI are all signs of PRES, which is both a clinical and radiological entity [ 2 ]. PRES patients can also show signs of quadriparesis with spasticity and hypertonia in all limbs, with an extensor plantar response [ 27 ]. Figure 2 below highlights the typical clinical features of PRES. Apart from the causes mentioned already, a few cases of PRES have been linked to procedures like angiography and cardiac catheterization with IV contrast, the implantation of a left ventricular assist device (LVAD), neurosurgery, and measles vaccination [ 24 ]. Immunosuppressants like cyclosporine and tacrolimus can cause PRES through several routes without significantly raising blood pressure. In addition to calcineurin inhibitors, cisplatin, IV immunoglobulins, cytarabine, L-asparaginase for treating acute lymphoblastic leukemia, and the monoclonal antibody bevacizumab for treating colon cancer have also been described as frequently reported drugs that cause PRES [ 24 ]. WBC counts greater than 9 × 10⁶, a urine protein to creatinine ratio greater than 1, hemoglobin lower than 10 g/dL, cerebral hemorrhage, and brainstem involvement are risk factors for worse outcomes in PRES patients [ 26 ]. The first line of diagnosis for this illness is a head MRI [ 16 ].
Repeated and advanced neuroimaging may be considered if standard MRI results were normal or failed to explain neuropsychiatric signs and symptoms. As a result, for PRES patients, the appropriate scan should be done at the appropriate time [ 20 ]. PRES could develop even in the absence of severe hypertension due to the cytotoxic effect of SLE. Consequently, PRES may show as the first sign of SLE rather than a side effect of treatment [ 20 ]. Association of Hypertension With PRES in SLE Patients In most instances, PRES is connected to hypertensive emergencies though this isn't always the case. The etiology of PRES has been connected to hypertension, which increases cerebral blood flow and eventually breaches the blood-brain barrier, creating vasogenic edema in the cortex [ 25 ]. High BP isn't always recorded in PRES, though. Even when drug levels are within the therapeutic range, immunosuppressive or cytotoxic drugs can have a direct toxic effect that can cause endothelial damage, decreased tissue perfusion, cytotoxic edema, blood-brain barrier disruption, and vasogenic edema. It's interesting to note that vasogenic edema can turn cytotoxic and cause cerebral infarction [ 23 ]. While the precise pathological mechanism for PRES in SLE patients is not clear, the aforementioned process involving hypertension in combination with endothelial damage and autoimmune activation, which SLE patients are at higher risk for, could potentially explain part of the pathophysiology [ 26 ]. We compiled and analyzed different patients with PRES-SLE and the association of elevated blood pressure among them. Table 8 summarizes the findings of a few studies that observed hypertension in SLE patients presenting with PRES. Reviewing the literature above, we observed that most of the previously described patients of SLE with PRES had severe hypertension (>170/110 mmHg) and renal failure. We discovered the following findings regarding their BP as indicated in Table 9 below after analyzing the data from two studies that distinguished PRES-lupus patients from PRES caused by other reasons. Another similar study compared the blood pressure readings between patients with PRES-lupus and patients with lupus only. They observed elevated blood pressure (>150/90 mmHg) in five out of 14 PRES-lupus patients compared to only one patient having hypertension out of six lupus patients [ 18 ]. In the above three studies [ 24 , 1 , 18 ], we observed a link between PRES and hypertension and increased hypertension severity in PRES-lupus patients compared to PRES patients without lupus. But at the same time, the severity of hypertension is not significantly associated with the intensity of the clinical and radiological manifestation of PRES. Similarly, on analyzing the case reports literature of eight patients of SLE presenting with PRES, we found seven patients to have elevated blood pressure (>150/90mmHg) and only one patient to be normotensive. All patients were further treated with antihypertensive medications and other supportive treatments [ 16 , 25 - 30 ]. In one report, the patient had a recurrence of symptoms after full resolution once when her blood pressure rose again, and she was treated again to normalize her blood pressure [ 29 ]. Hence, blood pressure maintenance is important during the course of the disease for SLE patients and should not be ignored. Role of PRES in SLE Patients In a case-control study, the prevalence of PRES was shown to be up to 0.43% in patients with SLE. 
Although PRES is uncommon among SLE patients, it is linked to a high mortality rate [ 26 ]. Therefore, it is important to understand PRES in SLE patients and its association with hypertension. PRES has been noted in lupus patients, particularly those on immunosuppressive medication or with renal disease or poorly controlled blood pressure [ 2 ]. When high doses of methylprednisolone or immunosuppressants are used to treat serious disease, patients with SLE-PRES frequently display a significant rise in blood pressure, renal failure, and fluid retention [ 16 ]. A total of 98 patients with SLE and PRES were recently analyzed across three retrospective reviews [ 29 ]. Although the onset of PRES was associated with an SLE flare in more than 90% of cases, other variables such as hypertension (82-95%), renal insufficiency (73-84%), and the use of immunosuppressive medications (50%) were frequently present [ 29 ]. In patients who had PRES, the SLE Disease Activity Index (SLEDAI) score was higher (by about six points), indicating more severe disease at the time of diagnosis [ 26 ]. Additionally, renal impairment, hypoalbuminemia, and thrombocytopenia are independent risk factors for PRES and may be related to SLE [ 26 ]. In one study, Merayo-Chalico et al. examined the expression of various serum cytokines, such as interleukins (ILs), as well as vascular endothelial growth factor (VEGF) and soluble CD40 ligand (sCD40L), in PRES-SLE patients and compared those levels with levels in SLE patients without PRES and in healthy controls. They analyzed the reports of 32 people (14 PRES-SLE patients, six healthy controls, six SLE patients in remission, and six SLE patients with active disease). They discovered that PRES-SLE patients had significantly greater IL-6 and IL-10 levels than the other groups (P = 0.013 and 0.025, respectively). Additionally, there was a positive association between the levels of IL-6 and IL-10 (r = 0.686, P = 0.007). Regarding the levels of sCD40L, VEGF, or other cytokines, there were no differences between groups [ 18 ]. Treatment of PRES in SLE Patients PRES might not always be completely reversible, despite the name. According to the available data on the consequences of PRES, there have been cases of cerebral infarction, subarachnoid hemorrhage, coma, and death. Management of PRES is largely supportive. Another crucial component of PRES management is treatment that addresses the underlying cause [ 2 ]. Patients with lupus-related PRES should have a 10-25% reduction in MAP or a diastolic blood pressure reading of less than 100 mmHg within the first two hours [ 24 ]. A target mean arterial blood pressure between 105 and 125 mmHg has been suggested [ 23 ]. Parenteral antihypertensive drugs should be used to lower blood pressure quickly while monitoring it closely, as a blood pressure drop that occurs too rapidly can lead to hypoperfusion and cause end-organ damage such as cerebral infarction, acute myocardial infarction, and renal failure [ 23 , 24 ]. Nimodipine, a calcium channel blocker, is potentially helpful in preventing cerebral vasospasm [ 23 ]. The selection of antihypertensive medications for SLE patients with lupus-related PRES should be cautious because several commonly used antihypertensive medications, such as hydralazine and methyldopa, might cause drug-induced lupus and are therefore inappropriate for SLE patients.
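The blood pressure targets cited above (a 10-25% reduction in MAP within the first two hours [ 24 ] and a suggested target MAP of 105-125 mmHg [ 23 ]) lend themselves to a small worked example. The sketch below only applies the standard approximation MAP = DBP + (SBP - DBP)/3 to those figures; it is an arithmetic illustration, not clinical guidance, and the function names and the example reading of 190/115 mmHg are invented.

    # Arithmetic illustration only, not treatment advice.
    def mean_arterial_pressure(sbp, dbp):
        # Standard approximation: MAP = DBP + (SBP - DBP) / 3
        return dbp + (sbp - dbp) / 3.0

    def initial_map_reduction_band(sbp, dbp):
        """MAP range after the 10-25% reduction quoted above [24]."""
        current_map = mean_arterial_pressure(sbp, dbp)
        return 0.75 * current_map, 0.90 * current_map

    # Hypothetical PRES-lupus patient presenting at 190/115 mmHg.
    sbp, dbp = 190, 115
    map_now = mean_arterial_pressure(sbp, dbp)        # 140 mmHg
    low, high = initial_map_reduction_band(sbp, dbp)  # (105, 126) mmHg
    print(f"Current MAP {map_now:.0f} mmHg; 10-25% reduction gives {low:.0f}-{high:.0f} mmHg, "
          f"close to the suggested 105-125 mmHg band [23]")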
In ICUs where close hemodynamic monitoring is easily accessible, IV antihypertensive medications like nitroprusside and labetalol (with alpha and beta blockade activity) are favored [ 24 ]. Anti-epileptic drugs should be used to manage acute seizures, no matter their cause. This should be done until the PRES symptoms stop manifesting [ 2 ]. Phenytoin or carbamazepine should not be used to treat PRES-related seizures in SLE patients because they can lead to drug-induced lupus and complicate the clinical picture of the patient's pre-existing lupus [ 24 ]. PRES can be brought on by using corticosteroids and immunosuppressive drugs, according to the findings published by Mak et al., who also found that these medications' administration needs to be discontinued immediately when this happens. Although PRES may indicate lupus activity, IV methylprednisolone, and cyclophosphamide are still the most often prescribed medications for people with lupus activity [ 24 ]. If someone has a significant fluid retention problem, hemodialysis may be necessary [ 2 ]. Another SLE PRES patient with a blood pressure of 160/98 in the Mani et al. review showed complete recovery from quadriparesis after initiating hemodialysis therapy, anti-edema medications, and antihypertensive medications along with cyclophosphamide and methylprednisolone in 10 days [ 27 ]. Her PRES was triggered due to grade 4 lupus nephritis and not high blood pressure, which led to fluid retention. In a similar study, another patient, a 19-year-old woman with severe grade 4 lupus nephritis and SLE disease activity index of 39, was treated with hemodialysis because of deranged RFT but eventually presented with clinical features of PRES two weeks later [ 29 ]. This shows that along with elevated BP, other factors like deranged kidney function, abnormal kidney biopsy, and SLEDAI severity also play an important role in PRES development. They should be kept in mind while evaluating and treating patients. Furthermore, intracranial bleeding (OR 14, 1.1-187.2, P = 0.04) and brainstem involvement (OR 10.9, 1.3-90.6, P = 0.003) were found to be predictive of a poor outcome in PRES patients [ 22 ]. In patients with PRES whose seizures and hypertension are poorly controlled, irreversible lesions can develop due to the transition from vasogenic to cytotoxic edema, indicating a change into intracerebral hemorrhages and infarcts, ultimately resulting in lifelong neurological impairment [ 24 ]. At the same time, despite a quick drop in blood pressure, a patient with SLE PRES in another evaluation did not entirely recover vision. Hence the word reversible may consequently be misleading because 50% of cases may result in persistent deficiency, particularly in the area of vision [ 28 ]. Differential Diagnosis of PRES in SLE Patients Imaging results and reversibility are major factors that help in separating PRES from other possible diagnoses, such as bilateral ischemic strokes in the posterior cerebral artery territory, central venous sinus thrombosis, demyelinating diseases, lupus encephalitis, cerebral vasculitis, and infectious or metabolic encephalopathy, which are all common in SLE patients, and prevent unnecessary extra testing [ 2 , 23 ]. The primary differential diagnosis of PRES is bilateral ischemic strokes in the posterior cerebral artery region. This distinction is significant because, while blood pressure should not be aggressively addressed in cases of cerebrovascular infarction, care of PRES requires quick control of blood pressure [ 23 ]. 
In lupus patients, PRES, neuropsychiatric SLE (NPSLE), and CNS problems can occur, and their clinical picture overlaps in most cases; hence, it is frequently difficult to distinguish between these conditions, especially in the early stages. The likelihood of PRES significantly increases in lupus patients with PRES-like neurological symptoms when the characteristic symptoms of PRES are promptly recognized, with special attention paid to the recent start or augmentation of immunosuppressive medicines. Thus, in addition to urgent neuroimaging, a thorough physical examination, checking for the focal neurological deficit, and a mental state examination are always the cornerstones in diagnosing PRES. The history should be carefully taken, with questions about headaches that recently started, seizures, visual disturbance, and recent changes in medication. After carefully ruling out other illnesses, an MRI of the brain with PRES-specific abnormalities leads to the correct diagnosis of PRES. Immunosuppressive therapy should be started or increased along with the appropriate auxiliary treatment such as antiepileptics or anti-psychotics for lupus patients who present with neuropsychiatric symptoms without particularly abnormal focal neurological signs, blood screening, cerebrospinal fluid (CSF) analysis, or neuroimaging findings because of the likely diagnosis of NPSLE [ 24 ]. The likelihood of corticosteroid-induced psychosis should also be considered in patients with active lupus who have just started taking or increased their corticosteroids, especially if the corticosteroid dose is high. Reducing the dosage of corticosteroids while closely monitoring neuropsychiatric symptoms and lupus activity is frequently effective in differentiating between NPSLE and corticosteroid-induced psychosis [ 24 ]. To rule out further central nervous system (CNS) disorders such as infection, demyelination, cerebral vasculitis, and subarachnoid hemorrhage, lumbar puncture and CSF analysis can be used [ 24 ]. Thrombotic thrombocytopenic purpura (TTP) should also be considered in lupus patients who report an altered neurological state, fever, microangiopathic hemolytic anemia, and renal impairment and should be further treated with plasmapheresis [ 24 ]. One of the pathological characteristics of NPSLE is the activation of endothelial cells. It typically happens after exposure to IL-1 and TNF-alpha; local release of IL-1 and IL-6 may worsen it. The blood-brain barrier (BBB) is damaged, and plasma leakage occurs in SLE patients with high SLEDAI due to elevated serum levels of TNF-alpha and other pro-inflammatory cytokines that may activate astrocytes and intracranial artery endothelial cells to create nitric oxide (NO) [ 20 ]. Limitations There is very little information about PRES, and even less information is accessible on SLE patients diagnosed with PRES. Even after a thorough search, no randomized clinical trials could be retrieved. Heterogeneity between studies is another limitation given we were doing a systematic review and the studies included had different study designs, number of participants, etc. It is also important to acknowledge the limitation imposed by relatively small sample sizes of patients in the included studies since that limits the statistical power. 
Regarding the connection between PRES-SLE and hypertension, although hypertension was found to be the predominant association in PRES-SLE patients, other parameters, such as the severity of SLE disease activity, immunosuppressant use, and lupus nephritis, were also found to be associated with it. Therefore, studies comparing all the relevant variables and narrowing them down to one primary explanation were lacking, which could have influenced the findings. Larger prospective studies are required to define the etiology of PRES in SLE patients, the treatment options, and the risk of poor outcomes among them.
JKB selected the research topic, contributed to the extraction of relevant articles for the review, analysis of data, creation of tables and figures, and drafted the article. JKB and KJ were involved in reviewing all the articles selected for the systematic review and any disagreements about eligibility were discussed with all other co-authors and resolved by general agreement. KJ and PKM helped in drafting the introduction and methodology section. SN and SK helped with drafting the discussion section. UC and VAO helped proofread the article and helped with drafting the article results. AP checked for errors and participated in editing the abstract and the article draft. EMA provided suggestions and ensured all journal guidelines and requirements were followed. SK participated in the study design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50620
oa_package/1d/46/PMC10788701.tar.gz
PMC10788702
38226074
Introduction Gallbladder cancer (GBC) is a very aggressive tumor with a dismal outcome, with a mean overall survival (OS) of six months and a five-year OS rate of 5% [ 1 ]. Approximately 60% of the patients with GBC have tumor in the fundus, 30% in the body, and 10% in the neck [ 2 ]. Seventy-five percent of the patients have unresectable or metastatic disease and are not candidates for curative resection [ 3 ]. However, the prognosis after surgical resection remains poor, with a five-year survival rate between 10% and 90% depending on disease factors such as tumor grade and stage [ 4 ]. The standard surgical procedure for GBC is radical cholecystectomy with or without bile duct resection. Routine extrahepatic bile duct (EHBD) resection, previously recommended for nodal clearance, is now performed selectively to achieve negative margins, as studies have shown no impact on survival but increased morbidity [ 4 - 7 ]. The bile duct resection is only performed if gross direct extension or microscopic involvement of the cystic duct margin (CDM) is examined intraoperatively [ 3 ]. Patients with macroscopic infiltration of hepatoduodenal ligament (HDL) or EHBD often present with obstructive jaundice [ 8 ]. Though the CDM is usually sent for frozen biopsy during surgery for GBC in most centers, there are no studies in the literature regarding its routine use irrespective of the tumor location. Positive CDM during index cholecystectomy in incidental gallbladder cancer (IGBC) is a significant negative prognostic factor [ 9 ]. Patients with positive CDM in IGBC are more likely to have residual disease in the EHBD (positive cystic duct, 42.1% vs. negative cystic duct, 4.3%) [ 10 ]. No studies demonstrate the impact of positive CDM on survival outcomes in resectable GBC. Thus, in our study, we aim to analyze the role of routine CDM frozen section in resectable GBC without macroscopic EHBD infiltration and jaundice regardless of the tumor location and its impact on survival, to evaluate predictive factors for CDM positivity, and also to assess clinicopathologic characteristics and outcomes of the anatomical location of the tumor in GBC.
Materials and methods This was a retrospective observational case-control study conducted at Govind Ballabh Pant Institute of Postgraduate Medical Education and Research, Maulana Azad Medical College, Delhi University, New Delhi, India. A total of 158 patients diagnosed with operable GBC were treated from May 2009 to March 2021. The study included patients with resectable GBC without clinical obstructive jaundice (n=105). Patients with IGBC (n=42), patients with obvious common hepatic duct (CHD)/common bile duct (CBD) involvement diagnosed preoperatively or intraoperatively (n=6), patients who received neoadjuvant therapy (n=3), and patients with metastatic disease diagnosed intraoperatively (M1, n=2) were excluded from the study (n=53). Patients were divided into two groups based on frozen section analysis: (1) CDM-negative and (2) CDM-positive. Propensity score matching (PSM) and analysis were performed for variables such as Eastern Cooperative Oncology Group (ECOG) performance status, tumor size, tumor-node-metastasis (TNM) stage, and adjuvant chemotherapy (Figure 1 ). Outcomes The primary outcome was to analyze the role of routine frozen biopsy of CDM in resectable GBC without obstructive jaundice. Secondary outcomes were to analyze the impact of positive CDM status on survival, to evaluate preoperative and intraoperative predictive factors for CDM positivity, and to evaluate clinicopathologic characteristics and outcomes of the anatomical location of the tumor. Postoperative complications were graded according to the Clavien-Dindo classification system [ 11 ]. The TNM staging of GBC was based on the 8th edition of the American Joint Committee on Cancer (AJCC) [ 12 ]. OS was defined as the time from surgery to death from any cause or last follow‐up (censored). Recurrence-free survival (RFS) was defined as the time from surgery to recurrence (diagnosed clinically and/or by imaging). Definition of tumor location The gallbladder (GB) is anatomically divided into fundus, body, infundibulum, and neck [ 13 ]. The neck is defined as the portion between the infundibulum and the cystic duct. Patients in the same study cohort were divided into two groups for subgroup analysis based on tumor involvement: (1) GBC with neck involvement and (2) GBC not involving the neck. The first group included patients with tumors involving the neck (e.g., tumor in the neck, tumor in the body and neck, and tumor replacing the GB and involving the neck). The second group included patients with tumors not involving the neck (e.g., tumor in the fundus, tumor in the body, and tumor in the fundus and the body). Histologically, the neck was defined as the area adjacent to the cystic duct having tubulo-alveolar mucus glands [ 14 ]. The fundus/body region was defined as the region distal to the neck where mucus glands are absent [ 14 ]. Management protocol All patients with GBC underwent clinical examination, followed by routine blood investigations with tumor markers such as carbohydrate antigen 19-9 (CA 19-9) and carcinoembryonic antigen (CEA), ultrasound (USG) of the abdomen, and contrast-enhanced computed tomography (CECT) of the chest and abdomen. The tumor site and extent have been defined by the CECT abdomen. Endoscopic ultrasound (EUS) with fine needle aspiration cytology (FNAC) was performed selectively in patients with enlarged inter-aortocaval (IAC) or para-aortic lymph nodes diagnosed on CECT abdomen. Patients with positive EUS-FNAC were referred for definitive chemotherapy and excluded from the study. 
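The propensity score matching step described above can be sketched in code for readers unfamiliar with the technique. The example below performs generic 1:1 nearest-neighbor matching on the logit of a logistic-regression propensity score; it is not the authors' actual analysis code, and the pandas/scikit-learn stack, the column names, and the absence of a caliper are all assumptions made for illustration.

    # Generic 1:1 nearest-neighbor propensity score matching sketch (not the study's actual code).
    # Assumed input: a pandas DataFrame with a binary 'cdm_positive' column and the matching
    # covariates (ECOG status, tumor size, TNM stage, adjuvant chemotherapy) under invented names.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    def match_one_to_one(df, treatment_col, covariates):
        X = pd.get_dummies(df[covariates], drop_first=True).astype(float)
        y = df[treatment_col].astype(int).to_numpy()

        # Propensity score: modeled probability of CDM positivity given the covariates.
        ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
        ps = np.clip(ps, 1e-6, 1 - 1e-6)
        logit_ps = np.log(ps / (1 - ps))

        treated = df.index[y == 1]
        control = df.index[y == 0]

        # Rank every control by distance (on the logit scale) from each treated patient.
        nn = NearestNeighbors(n_neighbors=len(control))
        nn.fit(logit_ps[y == 0].reshape(-1, 1))
        _, order = nn.kneighbors(logit_ps[y == 1].reshape(-1, 1))

        used, matched = set(), []
        for t, candidates in zip(treated, order):
            for c_pos in candidates:            # greedy matching without replacement
                c = control[c_pos]
                if c not in used:
                    used.add(c)
                    matched.extend([t, c])
                    break
        return df.loc[matched]

    # Usage with hypothetical column names:
    # matched = match_one_to_one(df, "cdm_positive",
    #                            ["ecog", "tumor_size_cm", "tnm_stage", "adjuvant_chemo"])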
Patients with resectable GBC were posted for surgical resection. Surgical procedure Staging laparoscopy was performed in patients with resectable disease. Patients with intraoperatively diagnosed metastatic disease were excluded from the study. Patients without evidence of metastasis underwent further assessment for resectability. If resectable, radical cholecystectomy was performed, including en bloc cholecystectomy with either anatomical segment 4b and 5 resection or non-anatomical 2 cm wedge resection and lymphadenectomy (involving stations 8, 12, and 13). Resections were performed laparoscopically whenever feasible, as described by Nag et al. [ 15 , 16 ]. Conversion to open laparotomy was performed whenever required (e.g., technical difficulty, intraoperative bleeding, and multi-visceral resection). CDM frozen section analysis was performed in all patients regardless of tumor location. According to CDM frozen status, patients were divided into CDM-positive and CDM-negative groups. In patients with positive CDM, EHBD excision with Roux-en-Y hepaticojejunostomy (RYHJ) was performed. Those with adjacent organ involvement underwent multi-visceral resections. If unresectable, then patients were planned for palliative chemotherapy. Follow-up Patients were followed up in the outpatient department (OPD) every three months with history, physical examination, CEA and CA 19-9, and USG of the abdomen in the first year and then every six months after that. CECT abdomen was carried out every six months during the first two years and then annually. After a multidisciplinary tumor board discussion, patients were planned for adjuvant chemotherapy based on histopathological TNM stage (>T2 and/or N+ disease). Recurrence was diagnosed either clinically or radiologically. Statistical analysis Data processing was performed using IBM SPSS Statistics for Windows, Version 26.0 (Released 2019; IBM Corp., Armonk, New York, United States). Data were described using range, mean±standard deviation, median, frequencies (number of cases), and relative frequencies (percentages) as appropriate. To determine whether the data were normally distributed, a Kolmogorov-Smirnov test was used. Comparison of quantitative variables between study groups was performed using the Student t test and Mann-Whitney U test for parametric and non-parametric data, respectively. Chi-squared test was performed to compare categorical data, and Fisher's exact test was performed when the expected frequency was less than five. Survival curves were constructed using the Kaplan-Meier method and compared using the log-rank test. We performed Cox regression analysis to identify the preoperative and intraoperative predictive factors for CDM positivity. Variables found to be significant in the univariate analysis were also included in the multivariate analysis. A p-value (two-sided) of <0.05 is considered statistically significant.
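For readers who wish to reproduce the survival comparisons described above, the sketch below shows how Kaplan-Meier curves, a log-rank test, and a Cox model are typically fitted in Python with the lifelines package. The study itself used SPSS, so this translation, the column names, and the example covariates are assumptions made purely for illustration.

    # Generic survival-analysis sketch (the study used SPSS; this is an illustrative Python analogue).
    # Assumed DataFrame columns: 'os_months', 'death' (1 = event, 0 = censored), 'cdm_positive' (1/0).
    from lifelines import KaplanMeierFitter, CoxPHFitter
    from lifelines.statistics import logrank_test

    def compare_os_by_cdm(df):
        neg = df[df["cdm_positive"] == 0]
        pos = df[df["cdm_positive"] == 1]

        # Kaplan-Meier curves for each group.
        kmf = KaplanMeierFitter()
        kmf.fit(neg["os_months"], event_observed=neg["death"], label="CDM-negative")
        ax = kmf.plot_survival_function()
        kmf.fit(pos["os_months"], event_observed=pos["death"], label="CDM-positive")
        kmf.plot_survival_function(ax=ax)

        # Log-rank test comparing the two curves.
        result = logrank_test(neg["os_months"], pos["os_months"],
                              event_observed_A=neg["death"], event_observed_B=pos["death"])
        print(f"Log-rank p-value: {result.p_value:.3f}")

    def cox_regression(df, covariates):
        # Cox proportional hazards model over the chosen covariates.
        cph = CoxPHFitter()
        cph.fit(df[["os_months", "death"] + covariates],
                duration_col="os_months", event_col="death")
        cph.print_summary()

    # Usage with hypothetical covariate names:
    # compare_os_by_cdm(df)
    # cox_regression(df, ["cdm_positive", "tnm_stage", "adjuvant_chemo"])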
Results Among 105 resectable GBC patients, a total of 14 (13.3%) had a positive CDM. Patients were divided into CDM-positive (n=14) and CDM-negative (n=91) groups in the unmatched study population. After PSM, 27 patients (CDM-positive=14, CDM-negative=13) were included in the analysis. A comparative subgroup analysis was performed between patients with GB neck tumors (n=30) and GB fundus/body tumors (n=75) in the same study population (N=105) to evaluate the clinicopathologic characteristics and outcomes according to the anatomical location of the tumor. Unmatched cohort In the unmatched study population, baseline demographics were comparable between the two groups, except for a higher ECOG performance status in the CDM-positive patients. Patients with CDM positivity had a higher rate of cholangitis and a higher rate of tumors involving the neck of the GB on CT. Patients with CDM positivity had a significantly longer duration of surgery, greater intraoperative blood loss, and higher morbidity (higher Clavien-Dindo grade of complications). All patients with CDM positivity underwent CBD excision with RYHJ. One patient with negative CDM underwent CBD excision with RYHJ due to a lymph node mass encasing the CBD (Table 1 ). All patients in this study population had negative resection margins (R0). Pathological features such as perineural involvement, number of positive lymph nodes, liver involvement, and CBD involvement were significantly higher in CDM-positive patients. CDM-positive patients had a higher tumor stage, but the difference was not statistically significant (p=0.055). CBD involvement was present in 50% of patients with positive CDM. One of 14 patients with positive CDM had a tumor that did not involve the neck region (with fundus/body involvement). Although the recurrence rate was higher in CDM-positive patients, the recurrence sites were similar between the groups. Median RFS (31.5 months vs. 12 months, p=0.001) and OS (36 months vs. 20 months, p=0.001) were significantly lower in CDM-positive patients (Table 2 , Figure 2 , Figure 3 ). Matched cohort In the matched study population, the baseline demographic, clinical, and biochemical parameters were comparable between groups except for tumor location on CT. Patients with positive CDM had a significantly higher rate of neck tumors (p=0.001). The types of surgeries performed were similar in the groups, except that patients with CDM positivity required an additional CBD excision with RYHJ (p=0.001). The operative time was comparable, but intraoperative blood loss was significantly higher in CDM-positive patients (p=0.026). The distribution of Clavien-Dindo complication grades was similar between groups (Table 1 ). Pathological features such as tumor type, tumor grade, lymphovascular invasion (LVI), perineural invasion (PNI), number of positive lymph nodes, and TNM stage were comparable except for tumor location. Patients with CDM positivity had a longer hospital stay (p=0.002). The groups were comparable in terms of adjuvant chemotherapy, recurrence rate, and site of recurrence. Median RFS (24 months vs. 12 months, p=0.30) and OS (24.5 months vs. 20 months, p=0.417) were comparable between groups (Table 2 , Figure 4 , Figure 5 ). Predictive factors for CDM positivity In univariate analysis, the presence of cholangitis, the location of the mass on CT, regional lymphadenopathy and intrahepatic biliary radicle dilatation (IHBRD) on CT, the intraoperative location of the mass, and liver infiltration were significant factors predicting CDM positivity.
On multivariate analysis, the preoperative location of the mass on CT and the intraoperative location of the mass were independent predictive factors for CDM positivity (Table 3 ). Subgroup analysis A comparative subgroup analysis was performed between patients with GB neck tumors (n=30) and without GB neck tumors (n=75) in the same study population (N=105). Patients with neck tumors had higher bilirubin and alkaline phosphatase (ALP) levels preoperatively. A positive CDM was found in 43.3% of patients with tumors in the neck region and in 1.33% of patients with tumors without neck involvement. One of 14 (7.14%) patients with positive CDM had a tumor in the fundus/body region, and the rest involved the neck. Intraoperatively, tumors involving the neck had higher CDM positivity and a longer operative time. Patients with neck tumors had a higher T stage (p=0.011), a higher rate of liver involvement (p=0.001) and CBD involvement (p=0.002), and a higher recurrence rate (p=0.002). Median RFS (30 months vs. 17 months, p=0.012) and OS (36 months vs. 24 months, p=0.048) were significantly lower in patients with neck tumors (Table 4 ).
Discussion In GBC, curative resection with an R0 resection margin remains the best chance for long-term survival. Resection of the EHBD in GBC should be selective and reserved only for patients with gross involvement by direct infiltration or microscopic involvement of the intraoperatively assessed frozen status of the CDM and HDL lymph nodes densely adherent to EHBD. Since there is no standardized protocol regarding intraoperative CDM frozen sections, some centers routinely use CDM frozen biopsy, while others do not due to the unavailability of frozen sections and analysis. There are no studies in the literature on the routine use and significance of CDM frozen sections. In the current study, we analyzed CDM frozen sections in all patients with resectable GBC regardless of tumor location. In the present study, out of 14 patients with positive CDM, one patient had a tumor in the fundus/body region (distant from the cystic duct) and had CBD involvement in histopathological examination (HPE). This explains that the patient had a non-contiguous spread of the primary tumor to the cystic duct and/or EHBD. Shimizu et al. [ 17 ] described four patterns of spread to the HDL and showed that tumor spread in GBC might be non-contiguous (type 3 spread). Pawlik et al. [ 10 ] conducted a study to analyze the incidence of residual disease during re-resection in IGBC. In patients with microscopically positive CDM, residual disease in the resected CBD was 42.1%. Another study by Nakata et al. [ 18 ] reported a 53.8% incidence of CBD infiltration in patients with cystic duct spread. Similarly, in the current study, seven patients (50%) of 14 patients with positive CDM frozen showed EHBD involvement on histopathology. As reported in the literature, patients who required additional bile duct resection had significantly higher blood loss, longer duration of surgery and postoperative hospital stay, and increased risk of postoperative morbidity [ 19 , 20 ]. In the present study, positive CDM requiring EHBD resection had significantly higher intraoperative blood loss (375 vs. 200 ml, p=0.026) and longer hospital stay (11.5 vs. five days, p=0.002), while postoperative morbidity was comparable (p=0.722). According to several studies, a positive CDM in GBC significantly predicts OS [ 9 , 18 ]. There are two mechanisms to explain the poor prognosis of positive CDM. First, the cystic duct has a lymphatic network that leads to HDL lymph nodal spread. Second, cancer can disseminate to EHBD by superficial spread or the lumen of the cystic duct. Vega et al. [ 9 ] performed a retrospective study to assess the initial CDM status in IGBC as a prognostic factor. They concluded that patients with positive CDM had lower OS than patients with negative CDM. Positive CDM was strongly associated with CBD recurrence. They also stated that patients who underwent EHBD resection for positive margins had similar OS to patients with negative CDM. Nakata et al. [ 18 ] reported that the patients with cancer spreading to the cystic duct in GBC had significantly lower three- and five-year survival rates than patients without cancer spread to the cystic duct. In the present study, after PSM, median RFS (24 months vs. 12 months, p=0.30) and OS (24.5 months vs. 20 months, p= 0.417) were comparable between groups (negative CDM vs. positive CDM). These results suggest that resection margin status, TNM stage, and adjuvant chemotherapy affect survival outcomes in GBC rather than CDM status. 
The present study performed univariate and multivariate analyses to determine the predictive factors for CDM positivity. After multivariate analysis, a tumor in the neck region of the GB (diagnosed preoperatively or intraoperatively) was identified as an independent predictive factor for positive CDM. Based on this finding, the application of CDM frozen biopsy should be mandatory in resectable GBC without obstructive jaundice involving the neck region. Given the rare possibility of a positive CDM, it may be avoided in tumors involving the fundus/body of the GB. According to the literature, the tumor location in GBC can significantly influence the outcome [ 14 , 21 , 22 ]. T2 tumors involving the hepatic side of the GB (T2b) are more aggressive and have a worse prognosis than T2 tumors involving the peritoneal side (T2a) [ 21 ]. Similarly, Leigh et al. [ 14 ] performed a retrospective study to determine the significance of the anatomical location of tumors in GBC for the outcome. They reported that compared to fundus/body tumors, neck tumors have a higher rate of preoperative jaundice, significantly more EHBD resection and bile duct involvement in HPE, a higher rate of PNI, a comparable TNM stage distribution, a significantly shorter OS, and thus significantly worse prognosis. Kurahara et al. [ 23 ] also concluded that GBC with neck involvement had a higher rate of PNI, a more significant number of positive lymph nodes, and a worse prognosis than fundus/body tumors. The present study found that GBC involving the neck had significantly higher levels of bilirubin and ALP. Neck tumors had a higher incidence of PNI (50% vs. 25%, p=0.021), a positive CDM requiring CBD excision (43.3% vs. 1.3%, p=0.001), and a CBD involvement (20% vs. 1.3%, p=0.002). Both groups had a comparable TNM stage distribution. Neck tumors had a significantly shorter RFS and OS (17 vs. 30 months, p=0.012, and 24 vs. 36 months, p=0.048, respectively). These results are consistent with the previous studies. The current study has several limitations. First, this study was inherently limited due to its retrospective nature and associated biases. Second, the patients with positive CDM were relatively small and therefore at risk of being underpowered. Third, the study covers a long period of time in which the management protocol, including adjuvant chemotherapy (regimen), was changed. Therefore, future prospective studies with a larger cohort are needed to validate these results. Despite its limitations, this study illustrates the role of routine use and the significance of CDM frozen biopsy in GBC. It provides a subgroup analysis to determine the prognostic utility of tumor location.
Conclusions Routine use of frozen biopsy of the CDM in patients with resectable GBC without jaundice, regardless of tumor location, can be avoided. Its use can be selectively preferred in patients with GBC involving the neck, since CDM positivity is found in only about one in a hundred resectable non-neck tumors and tumor location was found to be an independent predictive factor for CDM positivity. However, further prospective trials with a larger cohort are needed to provide more robust results. Positive CDM has survival outcomes comparable to negative CDM, provided a similar R0 resection rate and TNM stage are achieved. However, neck tumors have a worse prognosis than non-neck tumors.
Background In gallbladder cancer (GBC), extrahepatic bile duct (EHBD) resection is selectively performed if gross direct extension or microscopic involvement of the cystic duct margin (CDM) is detected. Although CDM is usually sent for frozen biopsy intraoperatively in most centers, there are no studies regarding the routine use of CDM frozen biopsy irrespective of the tumor location and paucity of literature regarding the impact of CDM status on recurrence-free and overall survival in GBC. The presence of obstructive jaundice in GBC usually indicates the involvement of EHBD or cystic duct-bile duct junction. The present study aimed to analyze the necessity of routine CDM frozen biopsy in patients with resectable GBC without jaundice, regardless of the tumor location. The impact of positive CDM on survival was also evaluated. Methods This retrospective observational case-control study was conducted from May 2009 to March 2021 and included 105 patients with resectable GBC without macroscopic EHBD infiltration and jaundice. Patients were divided into CDM-negative (n=91) and CDM-positive (n=14) groups. Propensity score matching was performed for variables such as performance status, tumor size, tumor-node-metastasis (TNM) stage, and adjuvant chemotherapy. After propensity score matching, 27 patients (CDM-negative=13, CDM-positive=14) were included. The primary outcome was to analyze the role of routine CDM frozen biopsy regardless of tumor location, and secondary outcomes were to study the impact of positive CDM status on survival and evaluate predictive factors for CDM positivity. A subgroup analysis was conducted to assess clinicopathologic characteristics and outcomes of the anatomical location of the tumor. Results Of 105 patients, 91 had negative CDM, and 14 had positive CDM. Among 14 patients with positive CDM, only one patient had a tumor in the fundus/body, and the remaining had a tumor involving the neck. All CDM-positive patients underwent bile duct excision with hepaticojejunostomy. Common bile duct (CBD) involvement was present in 50% of patients with positive CDM in the final histopathological examination. In the matched population, patients with positive CDM had a significantly higher rate of neck tumors (p=0.001). Recurrence-free survival (24 vs. 12 months, p=0.30) and overall survival (24.5 vs. 20 months, p=0.417) were comparable between CDM-negative and CDM-positive groups, respectively. On multivariate analysis, preoperative and intraoperative tumor location were independent predictive factors for CDM positivity. On subgroup analysis, 30 patients had tumor involving the neck of the gallbladder, and the remaining 75 had at the fundus and body of the gallbladder. Neck tumors had inferior recurrence-free survival (17 vs. 30 months, p=0.012) and overall survival (24 vs. 36 months, p=0.048) compared to non-neck tumors. Conclusions Routine use of CDM frozen analysis in patients with resectable GBC without jaundice, regardless of tumor location, can be avoided. It can be selectively preferred in patients with GBC involving the neck since tumor location is found to be an independent predictive factor for CDM positivity. Positive CDM has comparable survival outcomes to negative CDM, providing a similar R0 resection rate and tumor stage. However, neck tumors have a worse prognosis than non-neck tumors.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50585
oa_package/bb/91/PMC10788702.tar.gz
PMC10788703
38226101
Introduction Gerstmann-Sträussler-Scheinker syndrome (GSS) is an autosomal dominant neurodegenerative prion disease characterized by the deposition of prion protein (PrP) immunopositive amyloid plaques in cerebral and cerebellar parenchyma. There are at least 19 identified missense mutations in the PRNP gene that present with a GSS-like phenotype. In addition, the genotype at codon 129 of PRNP (commonly methionine or valine) may alter the clinical characteristics of the disease [ 1 ]. Histopathology of GSS reveals cerebral hemisphere and cerebellar vermis atrophy. Deposits of PrP amyloid, the protein product of PRNP, are found in various brain regions including the cerebral cortex, basal ganglia, and cerebellum. Neurofibrillary tau tangles with a paired helical structure, similar to those seen in Alzheimer’s disease, may also be seen. Spongiform changes of the brain are present in some cases but with lower prevalence than tau tangles [ 1 ]. The age of symptom onset usually ranges between the third and sixth decade of life. Symptoms may progress over as little as several months or more slowly over the course of one to two decades [ 1 ]. Progressive cerebellar ataxia, pyramidal signs, and cognitive decline are the most common symptoms, though other features, including parkinsonism, may co-occur and differ within families and carriers of the same mutations [ 1 ]. Two reported cases of D202N-related GSS demonstrated abnormal (I-123)-FP-CIT single-photon emission computed tomography (DaT-SPECT) imaging, but this imaging modality has not been reported in other genetic subtypes of GSS, including F198S. D202N usually presents around the eighth decade of life with axial akinetic rigidity, downgaze impairment, dementia, and hyperreflexia. F198S generally presents with a wider variability in age of onset. Eye movement abnormalities, memory problems, and psychotic depression are common initial manifestations. Here, we describe a case of F198S mutation GSS manifesting levodopa-responsive parkinsonism, levodopa-induced dyskinesia, and an abnormal DaT-SPECT. We demonstrate the importance of considering GSS in the workup of atypical parkinsonism with a strong family history. Parts of this article were presented as a poster at the 2022 International Parkinson and Movement Disorder Society International Congress in Madrid, Spain.
Discussion Of the approximately 390 cases of GSS reported to date, the third most prevalent mutation is F198S-129V. Patients often present with clumsiness, ataxia, dysarthria, parkinsonism, short-term memory loss, and cognitive impairment. Early-stage MRI may show cerebellar atrophy. VV homozygosity at codon 129 is associated with an age at onset up to 10 years earlier compared to MV heterozygosity. Histopathology reveals PrP amyloid, PrP diffuse plaques, and severe tau neurofibrillary pathology. PrP has been found in the stratum lacunosum-moleculare, hippocampal CA1, and subiculum, which are key regions for encoding memory. PrP has also been found in frontal, insular, temporal, and parietal cortices. In some cases, the neocortex has been found to contain alpha-synuclein immunopositive Lewy bodies. Interestingly, spongiform changes have not been noted in analyzed cases of F198S [ 1 ]. The abnormal DaT-SPECT in the present case may be consistent with previously reported PrP and tau accumulation in the caudate nucleus and putamen of patients with GSS [ 1 ]. To our knowledge, however, there is no consistent histopathological evidence of an isolated presynaptic dopamine deficit in GSS that could explain the patient’s response to levodopa. There have been two previously reported cases of D202N-related GSS that demonstrate abnormal DaT-SPECT (Table 1 ). Like our patient, the case reported by Plate et al. 2013 [ 2 ] had levodopa-responsive parkinsonism and memory loss. However, this patient had daytime sleep attacks and fluctuating cognitive dysfunction resembling dementia with Lewy bodies, and the presence or absence of treatment-related dyskinesia was not reported. The case reported by Baiardi et al. 2020 [ 3 ] displayed clinical features similar to those of our patient, such as memory loss and ataxia, but levodopa response was not reported. Given the limited reports of DaT-SPECT results in GSS cases, we also examined other prion diseases, identifying 13 reported cases of Creutzfeldt-Jakob disease (CJD) (Table 1 ). Two cases of CJD were familial, one was variant, and the remainder of the 13 were sporadic. All except three cases of sporadic CJD showed abnormal DaT-SPECT. Both cases of familial CJD and one case of sporadic CJD were assessed for levodopa responsiveness, but all three yielded negative results. There are several limitations of the present case. First, clinical information is not available for the patient’s relatives who had a similar presentation. Assessing levodopa response and obtaining DaT-SPECT results in these relatives could have suggested the prevalence or consistency of the present findings in F198S GSS cases. Additionally, cutaneous biopsy for phosphorylated alpha-synuclein could have been considered to rule out an unlikely concomitant synucleinopathy. More quantitative measures of clinical improvement in parkinsonism, such as Unified Parkinson’s Disease Rating Scale motor scores, could have also been collected.
Conclusions To our knowledge, this is the first reported case of a patient with GSS manifesting the combination of an abnormal DaT-SPECT, levodopa-responsive parkinsonism, and levodopa-induced dyskinesia. It is also the first report of DaT-SPECT imaging in F198S GSS. GSS is likely to be underdiagnosed due to its rarity and variable clinical presentations. This case highlights that GSS can resemble atypical parkinsonism both clinically and on imaging. When a salient family history and other suggestive clinical features are present, GSS should be added to the differential diagnosis of such patients.
Gerstmann-Sträussler-Scheinker syndrome (GSS) is an autosomal dominant neurodegenerative disease caused by point mutations in the prion protein gene (PRNP) . While variable, the clinical presentation typically encompasses progressive cerebellar ataxia, pyramidal signs, and cognitive impairment. Here, we report a case of F198S-associated GSS manifesting levodopa-responsive parkinsonism, levodopa-induced dyskinesia, and an abnormal (I-123)-FP-CIT single-photon emission computed tomography (DaT-SPECT). A 66-year-old male patient presented with six years of progressive recall and language impairment, with an initial impression of primary progressive aphasia. Over time he developed progressive cerebellar ataxia and akinetic parkinsonism. There was a family history of ataxia in multiple family members. Levodopa was prescribed up to 450 mg per day without benefit. Genetic testing at age 69 revealed a heterozygous F198S mutation in the PRNP gene, with MV heterozygosity at codon 129. At age 70, he developed mild generalized choreiform dyskinesia. Levodopa was discontinued, resulting in the resolution of dyskinesia with a concomitant marked worsening of akinetic parkinsonism. DaT-SPECT demonstrated bilaterally reduced putaminal binding. This case highlights that GSS can resemble atypical parkinsonism both clinically and with DaT-SPECT imaging. Taking a salient family history and other clinical features into consideration, GSS should be added to the differential diagnoses of such patients.
Case presentation A 66-year-old right-hand dominant male with a past medical history of essential hypertension and developmental speech disorder presented with a six-year history of short-term memory loss and gait abnormalities. He reported mild dysphagia and shaky handwriting. His partner reported that his previously outgoing nature had become reserved, apathetic, and anxious. He had no history of smoking, alcohol use, or drug use, and no dream enactment or hyposmia. His family history was positive for a phenotype encompassing progressive parkinsonism, cerebellar ataxia, and cognitive impairment in his maternal grandfather, two maternal aunts, mother, and sister. He also had two unaffected sisters and two children of unknown status (Figure 1 ). Initially, the examination was notable for progressive expressive language deficits with effortful speech and multidomain mild cognitive impairment without frank cerebellar ataxia or parkinsonism. On detailed neuropsychological testing, difficulties were noted in multiple cognitive domains including select aspects of learning and memory (verbal word list learning, working memory); language (verbal associative fluency, spelling, comprehension for complex commands); and tasks dependent upon psychomotor speed and dexterity. Compared to his prior exams, the patient’s memory had been stable; repetition and naming had improved; and cognitive flexibility had improved (although improvement on this latter task may be due to practice effects). The decline was found with verbal associative fluency and probably occurred with spelling. The pattern of performance was consistent with a disruption of functions of predominantly left frontal-temporal or left subcortical regions (basal ganglia, thalamus). While primary progressive aphasia remained in the differential diagnosis, subcortical dementia, or early senile dementia of Alzheimer's type was also considered. Magnetic resonance imaging (MRI) performed at an outside institution reportedly demonstrated mild diffuse atrophy and mild periventricular white matter disease, though images were unavailable for direct review for our evaluation. Over time, he developed progressive cerebellar gait, appendicular ataxia, and bradykinesia. A trial of carbidopa/levodopa 25/100 mg, up to one-and-a-half tablets three times per day, did not provide a clear motor benefit or emergence of dyskinesia. An Athena dominant ataxia panel did not identify any pathogenic abnormalities in the tested genes. Subsequent whole exome sequencing identified a pathogenic, heterozygous c.593T>C mutation (F198S) in the PRNP gene, with MV heterozygosity at codon 129. At age 71, the patient exhibited supranuclear gaze palsy. Specifically, there were saccadic intrusions consisting of square wave jerks. Extra-ocular movements were notable for saccadic pursuits and restricted vertical gaze upward and downward. Vertical gaze was intact with oculocephalic reflex testing. Saccade generation on optokinetic nystagmus testing was impaired both horizontally and vertically. There was no nystagmus. The patient also exhibited generalized chorea, asymmetric irregular jerky rest tremor, and multifocal and action-induced myoclonus. Levodopa was discontinued given the emergence of bothersome chorea and the apparent lack of significant motor benefit. Two weeks after discontinuation of levodopa, chorea was absent but his akinesia and gait had worsened. 
After reinitiation of levodopa at 100 mg three times per day, his akinesia improved and his choreiform dyskinesia returned. At this stage, DaT-SPECT revealed absent binding in the bilateral putamina and markedly reduced binding in the left caudate nucleus more than in the right caudate nucleus (Figure 2 ).
We are sincerely grateful to the patient and his family for their permission to present his clinical details in this report.
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50594
oa_package/9c/d0/PMC10788703.tar.gz
PMC10788704
38226102
Introduction and background Cardiac arrest in pediatric patients is a critical medical emergency that demands immediate attention and specialized care. The significance of pediatric cardiac arrest lies not only in its relatively rare occurrence but also in the profound impact it can have on the affected children and their families. Advances in resuscitation techniques over the years have played a pivotal role in improving outcomes, offering a glimmer of hope where once the prognosis seemed bleak [ 1 ]. Pediatric cardiac arrest represents a distressing and life-threatening event, necessitating an urgent and coordinated medical response. Despite its infrequency compared to adult cardiac arrest, the unique challenges posed by pediatric cases make them particularly complex. Children can experience cardiac arrest due to various underlying conditions, including congenital heart diseases, respiratory illnesses, and traumatic injuries. The consequences of pediatric cardiac arrest extend beyond the immediate medical implications, affecting the long-term well-being and quality of life of survivors [ 2 ]. Understanding the distinct nature of pediatric cardiac arrest is crucial for healthcare providers, as it requires tailored approaches to resuscitation, post-resuscitation care, and rehabilitation. Factors such as age, weight, and underlying health conditions contribute to the complexity of managing these cases. Hence, exploring the latest advancements in resuscitation techniques becomes imperative for enhancing the overall care and outcomes of pediatric patients facing cardiac arrest [ 3 ]. Over the years, there have been remarkable strides in resuscitation, driven by advancements in medical science and technology and a deeper understanding of the physiological responses during cardiac arrest. These innovations have significantly influenced the outcomes of both adult and pediatric resuscitation efforts. Key developments include improvements in cardiopulmonary resuscitation (CPR) techniques, the widespread availability of automated external defibrillators (AEDs), and the incorporation of evidence-based guidelines into clinical practice [ 4 ]. In the context of pediatric resuscitation, specialized considerations such as age-appropriate defibrillation energy levels, medication dosages, and airway management techniques have evolved to address the unique needs of children. The integration of simulation training and the emphasis on a multidisciplinary approach has further contributed to enhancing the preparedness of healthcare teams in managing pediatric cardiac arrest scenarios [ 5 ]. This review aims to comprehensively explore the landscape of post-resuscitation care in pediatric ICUs (PICUs) following cardiac arrest. By synthesizing the existing knowledge on this critical topic, we seek to provide healthcare professionals, researchers, and policymakers with a holistic understanding of the challenges and advancements in managing pediatric patients after successful resuscitation.
Conclusions In conclusion, the comprehensive review of post-resuscitation care in PICUs following cardiac arrest underscores the critical importance of a systematic and multidisciplinary approach. The immediate post-resuscitation phase requires careful navigation, from adherence to resuscitation guidelines and advances in therapeutic interventions to the nuanced management of neurological considerations, cardiovascular care, and respiratory support. Long-term outcomes and rehabilitation strategies are pivotal in optimizing the quality of life for pediatric survivors, emphasizing the need for individualized and family-centered care. A resounding call to action emerges as we reflect on these key findings. Continuous education, research initiatives, and quality improvement efforts are imperative, alongside strengthened multidisciplinary collaboration and advocacy for public awareness. By embracing these principles, the healthcare community can collectively contribute to ongoing advancements in pediatric post-resuscitation care, ultimately improving outcomes and fostering a culture of excellence in pediatric critical care.
This comprehensive review thoroughly examines post-resuscitation care in pediatric ICUs (PICUs) following cardiac arrest. The analysis encompasses adherence to resuscitation guidelines, advances in therapeutic interventions, and the nuanced management of neurological, cardiovascular, and respiratory considerations during the immediate post-resuscitation phase. Delving into the complexities of long-term outcomes, cognitive and developmental considerations, and rehabilitation strategies, the review emphasizes the importance of family-centered care for pediatric survivors. A call to action is presented, urging continuous education, research initiatives, and quality improvement efforts alongside strengthened multidisciplinary collaboration and advocacy for public awareness. Through implementing these principles, healthcare providers and systems can collectively contribute to ongoing advancements in pediatric post-resuscitation care, ultimately improving outcomes and fostering a culture of excellence in pediatric critical care.
Review Epidemiology of pediatric cardiac arrest The incidence rates, common causes, and demographic considerations of pediatric cardiac arrest are essential factors in understanding the epidemiology of this critical condition. The overall incidence rate for out-of-hospital cardiac arrest (OHCA) in the United States is 8.3 per 100,000 person-years, with a survival to hospital discharge rate of 11.3% [ 6 ]. Pediatric OHCA rates are reported to be 6-10 per 100,000 person-years, with only about 10% of patients surviving hospital discharge [ 7 ]. In-hospital cardiac arrest (IHCA) occurs in 2.6%-6% of children with cardiac disease [ 8 ]. Pre-arrest characteristics, including age and preexisting chronic conditions, are associated with varying cardiac arrest risk; infants and children with acquired and congenital heart disease are at higher risk [ 7 ]. Children with cardiac disease suffer cardiac arrest at rates of 2.6% to 6%, with corresponding survival ranging from 32-50.6% [ 8 ]. The survival rate for pediatric IHCA is higher, with 44-52% of patients surviving hospital discharge [ 7 ]. The incidence of pediatric cardiac arrest is relatively low, but the survival rates vary depending on whether the arrest occurs in the out-of-hospital or in-hospital setting. Pre-existing chronic conditions and congenital heart disease are significant risk factors for pediatric cardiac arrest. Resuscitation techniques in pediatric cardiac arrest Overview of Current Guidelines Guidelines for pediatric cardiac arrest emphasize a systematic and evidence-based approach to resuscitation. Organizations such as the American Heart Association (AHA) and the European Resuscitation Council (ERC) regularly update and disseminate these guidelines to healthcare providers. The recommendations encompass various aspects, including the recognition of cardiac arrest, the initiation of CPR, and the use of interventions such as defibrillation and medications [ 9 ]. Key components of the guidelines address the importance of high-quality CPR, the significance of early defibrillation in shockable rhythms, and the proper sequence of interventions in both basic and advanced life support. Understanding and adherence to these guidelines are critical for healthcare professionals to optimize the chances of successful resuscitation and improve overall patient outcomes [ 9 ]. Advances in CPR Recent years have witnessed notable advancements in CPR techniques, driven by a deeper understanding of the physiological mechanisms during cardiac arrest. High-quality CPR involves adequate chest compressions, proper ventilation, and minimizing interruptions to maintain perfusion. Innovations include real-time feedback devices that assist healthcare providers in optimizing compression depth, rate, and recoil [ 10 ]. Moreover, the concept of pit crew CPR, emphasizing a coordinated and efficient team approach, has gained prominence. This approach recognizes the importance of communication, role clarity, and swift transitions between chest compressors to minimize interruptions. Integrating technology, simulation training, and ongoing education further enhances healthcare teams' proficiency in performing high-quality CPR, especially in the dynamic and stressful environment of pediatric cardiac arrest [ 11 ]. Role of Automated External Defibrillators (AEDs) AEDs play a crucial role in the early management of pediatric cardiac arrest, particularly in cases involving shockable rhythms such as ventricular fibrillation or pulseless ventricular tachycardia. 
AEDs are designed to be user-friendly and can be employed by bystanders, including non-medical personnel. Their widespread availability in public spaces, schools, and healthcare settings has contributed significantly to improving the accessibility of early defibrillation [ 12 ]. In pediatric cases, the appropriate use of AEDs involves the consideration of pediatric pads or dose attenuators to deliver a shock suitable for the child's size and age. Integrating AED training in educational curricula and public awareness campaigns further enhances the community's ability to respond effectively to pediatric cardiac arrest scenarios [ 12 ]. Importance of Early Recognition and Intervention Early recognition of pediatric cardiac arrest is a critical determinant of successful resuscitation. Healthcare providers, caregivers, and bystanders must be trained to promptly identify signs of cardiac arrest, including the absence of a palpable pulse, unresponsiveness, and abnormal breathing patterns. Timely activation of the emergency medical system (EMS) and initiation of CPR are paramount in preserving brain and organ function during the critical minutes following cardiac arrest [ 13 ]. The window of opportunity for successful intervention narrows rapidly, emphasizing the need for a swift and coordinated response. Public awareness campaigns, community training initiatives, and integration of basic life support skills into educational curricula contribute to empowering individuals to recognize and respond to pediatric cardiac arrest effectively. Early intervention not only improves the likelihood of successful resuscitation but also plays a pivotal role in shaping long-term outcomes for pediatric patients who experience cardiac arrest [ 13 ]. Immediate post-resuscitation phase Stabilization and Monitoring Stabilization: Following the return of spontaneous circulation (ROSC), stabilization is the initial priority in post-resuscitation care for pediatric patients. This critical phase involves addressing immediate threats to life to ensure the patient's physiological stability. Key components include securing the airway to facilitate adequate ventilation, ensuring optimal oxygenation levels, and promptly addressing any underlying factors that may have precipitated the cardiac arrest. By swiftly addressing these aspects, healthcare providers aim to establish a foundation for further interventions and prevent the escalation of potential complications [ 14 ]. Monitoring: Continuous monitoring is paramount to closely track the pediatric patient's physiological parameters during the immediate post-resuscitation phase. Vital signs, including heart rate, blood pressure, respiratory rate, and oxygen saturation, are meticulously observed. Continuous ECG monitoring is employed to detect abnormalities in cardiac rhythm or signs of cardiac instability. This real-time surveillance enables healthcare teams to promptly identify and respond to changes, allowing for targeted interventions and adjustments to the ongoing management plan [ 15 ]. Post-cardiac arrest syndrome: Recognition and management of post-cardiac arrest syndrome are central in the immediate post-resuscitation phase. This syndrome encompasses a complex interplay of systemic ischemia-reperfusion response, myocardial dysfunction, and neurological injury. Early identification of the components of post-cardiac arrest syndrome is crucial for tailoring interventions to mitigate their impact on overall patient outcomes. 
This may involve strategies to optimize oxygen delivery to tissues, stabilize cardiac function, and prevent or minimize neurological injury. By addressing the multifaceted nature of post-cardiac arrest syndrome, healthcare providers aim to enhance the chances of a favorable recovery and improve the overall prognosis for pediatric patients who have undergone resuscitation [ 16 ]. Neurological Assessment Immediate evaluation: Neurological assessment takes precedence immediately after stabilization in post-resuscitation care for pediatric patients. The Glasgow Coma Scale (GCS) or age-appropriate alternatives are pivotal tools to gauge the patient's level of consciousness. This swift and systematic evaluation enables healthcare providers to ascertain the patient's neurological status rapidly. The findings from this initial assessment guide the subsequent course of interventions and play a crucial role in prognostication. Early identification of neurological deficits lays the foundation for targeted and individualized neuroprotective strategies [ 17 ]. Tools for assessment: The arsenal of assessment tools extends beyond traditional measures, incorporating advanced monitoring techniques to glean real-time insights into cerebral function. Continuous electroencephalography (EEG) provides ongoing monitoring of brain electrical activity, aiding in detecting seizure activity or other abnormal patterns. Near-infrared spectroscopy (NIRS) offers a non-invasive means to assess cerebral oxygenation and perfusion, contributing valuable information for managing potential neurological complications. These sophisticated tools enhance the precision of neurological assessment, enabling healthcare teams to tailor interventions based on dynamic and specific neurophysiological data [ 18 ]. Therapeutic hypothermia: Embracing therapeutic hypothermia represents a well-established and effective intervention to mitigate neurological injury in the immediate post-resuscitation phase. This evidence-based approach involves carefully lowering the patient's body temperature to a predefined target range. By inducing hypothermia, healthcare providers aim to confer neuroprotection and reduce the risk of secondary brain injury. This therapeutic strategy is particularly relevant in the context of post-cardiac arrest care, where the potential for neurological compromise is heightened. The meticulous application of therapeutic hypothermia underscores a proactive approach to preserving neurological function and improving overall outcomes in pediatric patients who have experienced cardiac arrest [ 19 ]. Hemodynamic Management Fluid resuscitation: The cornerstone of hemodynamic management in the post-resuscitation phase is adequate fluid resuscitation. Maintaining hemodynamic stability is contingent on carefully titrated fluid administration. Close monitoring of central venous pressure (CVP) or other relevant hemodynamic parameters guides healthcare providers in assessing fluid responsiveness and tailoring fluid management strategies. This meticulous approach optimizes intravascular volume, prevents hypovolemia, and supports overall cardiovascular function [ 20 ]. Inotropic support: In persistent hemodynamic instability, initiating inotropic and vasopressor support becomes crucial to post-resuscitation care. Agents such as dopamine, epinephrine, and other inotropes are employed and titrated to address specific needs, optimizing cardiac output and blood pressure. 
The judicious use of inotropic support is guided by continuous monitoring of hemodynamic parameters and the patient's response to intervention. This dynamic approach ensures a tailored, patient-centered strategy to enhance myocardial contractility and systemic perfusion [ 21 ]. Goal-directed therapy: Hemodynamic management extends beyond immediate stabilization to incorporate goal-directed therapy. This proactive approach involves setting and achieving specific targets for perfusion, oxygen delivery, and other relevant hemodynamic parameters. Individualized for each patient, goal-directed therapy ensures that the unique responses and requirements of the pediatric patient guide interventions. Regular reassessment and adjustments to therapeutic strategies contribute to the precision of care, aiming for optimal tissue perfusion and overall hemodynamic balance. The implementation of goal-directed therapy reflects a nuanced and adaptive approach to post-resuscitation hemodynamic management [ 22 ]. Respiratory Support Mechanical ventilation: The initiation and management of mechanical ventilation represent pivotal aspects of post-resuscitation care for many pediatric patients. Following cardiac arrest, respiratory function may be compromised, necessitating the provision of mechanical support. Ventilator settings, including tidal volume, respiratory rate, and positive end-expiratory pressure (PEEP), are meticulously adjusted to achieve a delicate balance. The primary goal is to optimize oxygenation and ventilation while minimizing the risk of ventilator-induced lung injury. This involves considering factors such as lung compliance, airway resistance, and the overall respiratory status of the patient [ 23 ]. Oxygenation targets: Oxygenation targets play a crucial role in post-resuscitation care, and it is essential to consider factors such as permissive hypercapnia and the role of CO2. Notably, the available data on pediatric arrests focus largely on cardiac patients, particularly those with pulmonary hypertension [ 24 ]. Continuous monitoring of arterial blood gases and oxygen saturation provides real-time insights into the patient's oxygen status. This information serves as a guide for healthcare providers to make necessary adjustments to oxygen delivery and ventilation strategies. The goal is to ensure that oxygen levels align with the unique needs of pediatric patients. Striking the right balance is paramount, as it helps avoid both hypoxia and hyperoxia, given the potential adverse effects associated with oxygen imbalances. This vigilant approach to oxygenation targets is critical in preventing complications and optimizing respiratory support during the post-cardiac arrest phase [ 24 ]. Temperature Management Normothermia maintenance: Temperature management emerges as a pivotal consideration in post-resuscitation care, with the primary goal of maintaining normothermia or, where indicated, achieving targeted therapeutic hypothermia. The choice between normothermia and therapeutic hypothermia is guided by the patient's clinical condition and established protocols. In cases where therapeutic hypothermia is indicated, external cooling devices or other targeted measures are employed to carefully lower the patient's body temperature to a predefined therapeutic range. Conversely, for patients in whom normothermia is preferred, proactive warming measures are implemented to prevent hypothermia and support overall physiological stability [ 25 ]. 
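As a rough illustration of the threshold logic implied by this kind of temperature management, the short Python sketch below classifies a core-temperature reading against a chosen target band and a fever cut-off. The 36.0-37.5 C band and the 38.0 C cut-off used in the example are illustrative placeholders rather than protocol recommendations; actual targets come from institutional targeted-temperature-management protocols and the trials cited in the text.

def temperature_status(core_temp_c, target_low, target_high, fever_cutoff_c=38.0):
    """Classify a core temperature against a chosen target band.
    All thresholds are illustrative placeholders, not clinical recommendations."""
    if core_temp_c >= fever_cutoff_c:
        return "hyperthermia: escalate cooling per local protocol"
    if core_temp_c < target_low:
        return "below target: apply warming measures"
    if core_temp_c > target_high:
        return "above target: adjust cooling device"
    return "within target band"

# Example with a hypothetical normothermia band of 36.0-37.5 C
for reading in (35.4, 36.8, 38.3):
    print(reading, "->", temperature_status(reading, target_low=36.0, target_high=37.5))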
Prevention of hyperthermia: Actively preventing hyperthermia is integral to temperature management post-resuscitation, mainly due to its potential to exacerbate neurological injury. Continuous temperature monitoring ensures timely detection of deviations from the targeted range, prompting swift interventions to prevent hyperthermia. These interventions may include cooling or warming measures adjustments, pharmacological interventions, or other methods tailored to the individual patient's needs. By maintaining a meticulous approach to temperature regulation, healthcare providers aim to create an environment conducive to optimal recovery, minimizing the risk of complications associated with temperature fluctuations in the post-cardiac arrest phase [ 26 ]. Neurological considerations Brain Injury After Cardiac Arrest Ischemia-reperfusion injury: The intricate process of cardiac arrest and subsequent resuscitation introduces a unique set of challenges, prominently featuring ischemia-reperfusion injury. The initial deprivation of oxygen and nutrients during cardiac arrest sets the stage for a complex sequence of events. While vital for survival, the subsequent restoration of blood flow initiates a cascade that can lead to cellular damage and, critically, brain injury. Understanding the nuances of ischemia-reperfusion injury is pivotal in devising strategies to minimize its impact and optimize neurological outcomes in the aftermath of pediatric cardiac arrest [ 27 ]. Global hypoxic-ischemic injury: Pediatric patients, due to their developmental vulnerabilities, are particularly susceptible to global hypoxic-ischemic injury, a condition that can result in varying degrees of neurological impairment. Several factors, including the duration of cardiac arrest, the effectiveness of resuscitation efforts, and the underlying cause of the arrest influence the severity of the resultant brain injury. Recognizing the unique susceptibility of pediatric patients to global hypoxic-ischemic injury underscores the importance of tailored and vigilant post-resuscitation care to mitigate neurological consequences [ 28 ]. Secondary brain injury: In the post-resuscitation phase, efforts are concentrated on preventing or mitigating secondary brain injury. Beyond the immediate challenges posed by ischemia-reperfusion injury, factors such as fluctuations in blood pressure, oxygen levels, and body temperature become critical determinants of ongoing neurological damage. Careful management and monitoring of these variables are imperative to prevent the exacerbation of injury and support the delicate balance required for optimal neurological recovery. The emphasis on preventing secondary brain injury reflects a proactive and holistic approach to post-cardiac arrest care, aiming to safeguard against additional insults that could compromise neurological outcomes in pediatric patients [ 29 ]. Tools for Neurological Assessment GCS: The GCS stands as a foundational tool for neurological assessment, and for younger patients, age-appropriate alternatives are employed. This standardized and widely utilized scale provides a systematic method for evaluating the level of consciousness based on three key components: eye response, verbal response, and motor response. The GCS serves as a vital metric for gauging neurological function and is instrumental in the initial assessment of pediatric patients' post-cardiac arrest, providing valuable information to guide subsequent interventions and prognostication [ 17 ]. 
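To make the scoring described above concrete, the following Python sketch sums the three GCS components and checks their standard ranges (eye 1-4, verbal 1-5, motor 1-6, giving a total of 3-15). It models the standard scale only; the age-appropriate pediatric modifications mentioned in the text, such as the verbal component for pre-verbal children, are not encoded here.

def glasgow_coma_scale(eye, verbal, motor):
    """Return the total GCS (3-15) from its three components.
    Component ranges: eye 1-4, verbal 1-5, motor 1-6."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS component out of range")
    return eye + verbal + motor

# Example: eye opening to speech (3), confused speech (4), localising pain (5) -> GCS 12
print(glasgow_coma_scale(eye=3, verbal=4, motor=5))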
Continuous EEG monitoring: Continuous EEG monitoring emerges as a dynamic and essential tool for real-time insights into brain activity during post-cardiac arrest. This continuous surveillance is particularly valuable for detecting seizures, a common occurrence in the aftermath of cardiac arrest. Timely identification of seizures through continuous EEG monitoring allows for prompt intervention, preventing further neurological injury. The nuanced information provided by continuous EEG monitoring contributes to a more comprehensive understanding of the evolving neurophysiological status of pediatric patients, guiding healthcare providers in tailoring interventions to optimize neurological outcomes [ 30 ]. NIRS: NIRS represents a non-invasive and valuable tool employed in post-resuscitation care to monitor cerebral oxygenation levels. By utilizing near-infrared light, NIRS enables the assessment of oxygen delivery to the brain, offering insights into the adequacy of cerebral perfusion. Continuous monitoring with NIRS guides healthcare providers in real time, facilitating interventions to optimize cerebral oxygenation and prevent hypoxic events. This technology adds a layer of precision to neurological assessment, allowing for targeted and proactive measures to support optimal brain function in the critical post-cardiac arrest phase of pediatric care [ 31 ]. Therapeutic Hypothermia and Its Effectiveness The rationale for therapeutic hypothermia: Therapeutic hypothermia, a deliberate lowering of the body temperature, is grounded in the rationale of mitigating neurological injury following events such as cardiac arrest. This intervention is founded on the understanding that hypothermia can induce several neuroprotective mechanisms. By reducing the body's metabolic demands, therapeutic hypothermia minimizes the potential for cellular damage during periods of reduced oxygen supply. Additionally, hypothermia has anti-inflammatory properties, attenuating the inflammatory response that often accompanies neurological insults. Furthermore, by limiting the extent of secondary brain injury, therapeutic hypothermia emerges as a crucial strategy to optimize neurological outcomes in the aftermath of critical events [ 32 ]. Targeted temperature management: Implementing therapeutic hypothermia involves precisely controlling the patient's body temperature within a targeted and predefined range. This meticulous management is typically achieved through specialized cooling devices, such as cooling blankets or intravascular cooling catheters. Close monitoring of temperature parameters ensures that the targeted range is maintained consistently throughout the intervention. The duration and depth of hypothermia are crucial aspects determined based on individual patient factors, such as age, underlying health conditions, and the nature of the neurological insult. The careful orchestration of targeted temperature management reflects a commitment to optimizing the therapeutic effects of hypothermia while avoiding potential complications [ 25 ]. Effectiveness in pediatric patients: While therapeutic hypothermia has demonstrated efficacy in improving neurological outcomes in adults following cardiac arrest, its role in pediatric patients remains an area of active research and evolving understanding. Ongoing studies focus on determining optimal protocols, including the most effective duration and depth of hypothermia, and identifying specific patient selection criteria for pediatric populations. 
The challenges in extrapolating findings from adult studies to pediatrics emphasize the need for tailored approaches that consider pediatric patients' unique physiological and developmental aspects. As the evidence base continues to expand, therapeutic hypothermia holds promise as a neuroprotective intervention in pediatric post-resuscitation care, with ongoing efforts to refine its application to maximize benefits for this specific patient population [ 33 ]. Emerging Neuroprotective Strategies Pharmacological interventions: Ongoing research explores various pharmacological agents with potential neuroprotective properties in the post-cardiac arrest phase. These agents include antioxidants, anti-inflammatory drugs, and compounds targeting specific pathways implicated in ischemia-reperfusion injury. The quest for pharmacological interventions seeks to identify substances that can mitigate cellular damage, reduce inflammation, and enhance overall neuroprotection. Exploring pharmacological avenues holds promise for developing novel therapeutic strategies aimed at improving neurological outcomes in pediatric patients following cardiac arrest [ 34 ]. Stem cell therapy: Stem cell therapy has emerged as a captivating avenue for neuroprotection in the post-cardiac arrest period. With their regenerative potential and capacity to modulate inflammatory responses, stem cells offer a unique approach to enhancing neurological recovery. Clinical trials are underway to assess stem cell therapy's safety and efficacy in the pediatric population following cardiac arrest. Exploring this innovative therapeutic strategy represents a frontier in neuroprotection research, potentially revolutionizing approaches to post-resuscitation care by harnessing the reparative capabilities of stem cells [ 35 ]. Precision medicine approaches: Tailoring neuroprotective strategies based on precision medicine principles is an evolving frontier in post-cardiac arrest care. This approach involves customizing interventions to align with individual patient characteristics, genetic factors, and the specific etiology of cardiac arrest. By considering the unique aspects of each patient's condition, including genetic predispositions and underlying health factors, precision medicine aims to optimize the selection and application of neuroprotective interventions. This personalized approach can enhance the effectiveness of interventions, improve overall outcomes, and pave the way for a more individualized and targeted paradigm in the field of post-resuscitation care for pediatric patients [ 36 ]. Cardiovascular care post-resuscitation Management of Post-Cardiac Arrest Syndrome (PCAS) Understanding PCAS: PCAS constitutes a complex array of physiological disturbances that manifest after the ROSC. This syndrome encompasses a multifaceted interplay of responses, including a systemic ischemia-reperfusion reaction, myocardial dysfunction, neurological injury, and the lingering risk of persistent factors that precipitate cardiac arrest. A comprehensive understanding of PCAS is foundational for devising targeted interventions to address its diverse components and optimize overall patient recovery [ 37 ]. Multifaceted approach: Effectively managing PCAS necessitates a multifaceted and integrated approach. Hemodynamic stabilization is a cornerstone, involving meticulous attention to cardiovascular function to maintain stability. 
Targeted temperature management, a key component of post-resuscitation care, modulates the body's response and minimizes neurological injury. Respiratory support is equally integral, ensuring optimal oxygenation and ventilation. A coordinated effort to address the distinct facets of PCAS, encompassing cardiovascular, neurological, and respiratory considerations, is crucial. This holistic approach aims to mitigate the impact of PCAS synergistically, enhancing the chances of a favorable outcome for the pediatric patient [ 38 ]. Optimizing oxygen delivery: The optimization of oxygen delivery to tissues is central to mitigating the effects of PCAS. This involves a concerted effort to ensure adequate oxygenation, maintain hemodynamic stability, and address potential underlying cardiac arrest causes. By optimizing oxygen delivery, healthcare providers seek to minimize organ dysfunction risk and bolster vital organs' overall resilience post-resuscitation. This proactive strategy encompasses meticulous attention to ventilation parameters, hemodynamic management, and targeted interventions tailored to the specific needs of the pediatric patient. In doing so, the goal is to alleviate the impact of PCAS on organ function and enhance the potential for a positive recovery trajectory [ 39 ]. Pharmacological Interventions Inotropic and vasopressor support: In the presence of persistent myocardial dysfunction and hemodynamic instability post-resuscitation, the administration of inotropic agents, such as dobutamine, and vasopressors, such as epinephrine, becomes a critical component of management. These medications are pivotal in improving cardiac contractility and maintaining systemic blood pressure, supporting adequate organ perfusion. The careful titration of inotropic and vasopressor support is guided by continuous monitoring of hemodynamic parameters, ensuring a tailored approach to address the unique needs of the pediatric patient in the post-cardiac arrest phase [ 21 ]. Antiarrhythmic medications: Pharmacological interventions extend to managing arrhythmias that may arise post-resuscitation. Continuous cardiac monitoring guides the administration of antiarrhythmic medications to stabilize cardiac rhythm and prevent further complications. The selection of specific antiarrhythmic agents is based on the type and severity of arrhythmias observed. This proactive approach to rhythm management contributes to the overall stability of the cardiovascular system, minimizing the risk of adverse events related to cardiac dysrhythmias [ 40 ]. Volume management: Careful and judicious fluid resuscitation is essential in post-resuscitation care. Balancing fluid administration to optimize preload and cardiac output while avoiding fluid overload is crucial. The intricacies of volume management are guided by continuous assessment of hemodynamic parameters, ensuring that fluid administration aligns with the specific needs of the pediatric patient. This tailored approach helps prevent complications such as pulmonary edema and worsening myocardial function, contributing to the overall stability of the cardiovascular system in the critical post-cardiac arrest phase [ 41 ]. Monitoring Cardiac Function Continuous ECG monitoring: Immediate initiation of continuous ECG monitoring post-resuscitation is a fundamental practice, providing real-time surveillance of cardiac rhythm. 
This continuous monitoring is essential for promptly detecting any arrhythmias or changes in cardiac rhythm that may arise in the critical post-cardiac arrest phase. The ongoing assessment enables healthcare providers to swiftly intervene, implementing timely adjustments to pharmacological therapies or other interventions to maintain cardiac stability and prevent adverse events [ 42 ]. Echocardiography: Bedside echocardiography emerges as a powerful diagnostic tool, offering real-time visualization of cardiac function. This imaging modality allows healthcare providers to assess myocardial contractility, chamber dimensions, and the presence of any structural abnormalities. The dynamic information echocardiography provides is instrumental in tailoring management strategies based on the individual cardiac status of the pediatric patient post-resuscitation. It facilitates a nuanced understanding of cardiovascular dynamics, guiding interventions to optimize cardiac function and mitigate potential complications [ 43 ]. Hemodynamic monitoring: Continuous hemodynamic monitoring, encompassing measurements such as CVP and arterial blood pressure, provides valuable insights into the cardiovascular status of the pediatric patient. This information is pivotal in guiding interventions to optimize perfusion, prevent complications, and maintain hemodynamic stability. The continuous nature of hemodynamic monitoring ensures that healthcare providers can promptly respond to changes in the patient's cardiovascular status, contributing to a proactive and tailored approach in post-cardiac arrest care [ 44 ]. Biomarkers: Serum biomarkers, including troponin and brain natriuretic peptide (BNP), serve as valuable indicators of myocardial injury and stress. Serial measurements of these biomarkers contribute to the ongoing assessment of cardiac function in the post-resuscitation phase. Elevated levels may signify ongoing cardiac strain or injury, prompting further investigation and guiding therapeutic decision-making. Incorporating biomarkers into the monitoring protocol adds a layer of precision to assessing cardiac status, supporting healthcare providers in optimizing post-resuscitation cardiac care for pediatric patients [ 45 ]. Respiratory support and ventilation Mechanical Ventilation Strategies Initiation of mechanical ventilation: The initiation of mechanical ventilation is a crucial step in post-cardiac arrest care for many pediatric patients, aiming to ensure optimal oxygenation and ventilation. The decision to initiate mechanical ventilation is a nuanced process influenced by various factors. The patient's level of consciousness, respiratory effort, and the underlying cause of cardiac arrest, all contribute to the determination of whether mechanical support is necessary. Swift and accurate assessment of these factors guides healthcare providers in making informed decisions about the timing and necessity of initiating mechanical ventilation to support respiratory function in the post-resuscitation phase [ 46 ]. Ventilator mode selection: The choice of ventilator mode is a personalized decision tailored to each patient's specific needs. Ventilator modes, including volume-controlled ventilation (VCV), pressure-controlled ventilation (PCV), or other hybrid modes, are selected based on a comprehensive evaluation of factors. Lung compliance, airway resistance, and the requirement for synchronized or assist-control ventilation, all contribute to the decision-making process. 
This tailored approach ensures that the chosen ventilator mode aligns with the patient's respiratory mechanics, optimizing the effectiveness of mechanical support and promoting respiratory stability [ 47 ]. Optimizing ventilator settings: Optimizing ventilator settings becomes a continuous and dynamic process once mechanical ventilation is initiated. Adjustments to parameters such as tidal volume, respiratory rate, and positive end-expiratory pressure (PEEP) are meticulously made to achieve adequate gas exchange while minimizing the risk of ventilator-induced lung injury. Lung-protective strategies, including low tidal volume ventilation, are often employed to prevent further damage to lung tissue. The ongoing assessment and refinement of ventilator settings reflect a commitment to providing tailored respiratory support, ensuring that the ventilatory parameters are attuned to the specific needs of the pediatric patient in the critical post-cardiac arrest period [ 48 ]. Oxygenation and Ventilation Targets Arterial blood gas monitoring: Continuous monitoring of arterial blood gases (ABGs) is a cornerstone in post-cardiac arrest care, providing vital insights into the patient's respiratory and metabolic status. This ongoing assessment guides ventilator setting titration to achieve the desired oxygenation and ventilation targets. ABG analysis informs adjustments to inspired oxygen concentration and ventilator parameters, ensuring the maintenance of appropriate arterial oxygen and carbon dioxide levels. This real-time feedback loop, facilitated by ABG monitoring, enables healthcare providers to tailor respiratory support to the dynamic needs of the pediatric patient, optimizing gas exchange and respiratory function [ 49 ]. Oxygenation targets: Individualized oxygenation targets are established based on a comprehensive assessment of the patient's clinical condition and underlying pathology. Monitoring oxygen saturation (SpO2) becomes instrumental in achieving these targets, guiding adjustments to inspired oxygen concentrations. The delicate balance lies in optimizing oxygen delivery to tissues while avoiding the potential harm of hyperoxia, including oxidative stress. By closely monitoring oxygenation targets, healthcare providers strive to create an environment that supports tissue oxygenation and minimizes the risk of complications related to imbalances in oxygen levels post-cardiac arrest [ 50 ]. Ventilation targets: Ventilation targets revolve around optimizing the elimination of carbon dioxide while preventing excessive ventilation that could lead to respiratory alkalosis. Complementing ABG analysis, end-tidal carbon dioxide (EtCO2) monitoring becomes a valuable tool in assessing ventilation adequacy. This monitoring assists healthcare providers in gauging the patient's ventilatory status and guides adjustments to ventilator settings. The goal is to strike a precise balance, ensuring effective carbon dioxide removal while avoiding the adverse effects of excessive ventilation. By incorporating EtCO2 monitoring into the comprehensive respiratory assessment, healthcare providers can refine ventilator management strategies to meet the specific needs of pediatric patients in the critical post-cardiac arrest phase [ 51 ]. Addressing Respiratory Complications Prevention of ventilator-associated complications: Mitigating the risk of ventilator-associated complications, notably ventilator-associated pneumonia (VAP) and barotrauma, involves implementing a comprehensive set of preventive strategies. 
Rigorous adherence to proper hand hygiene practices, elevation of the head of the bed to reduce aspiration risk, and meticulous attention to aseptic techniques during airway management contribute to a multifaceted approach. Collectively, these preventive measures aim to create an environment that minimizes the potential for complications associated with mechanical ventilation, supporting the overall respiratory well-being of the pediatric patient in the post-cardiac arrest phase [ 52 ]. Early recognition of respiratory distress: Vigilant monitoring for signs of respiratory distress assumes paramount importance in post-cardiac arrest care. Early recognition of indicators such as increased work of breathing, chest retractions, or persistent hypoxemia allows for timely intervention. Healthcare providers can swiftly respond by adjusting ventilator settings, administering bronchodilators as needed, or considering advanced respiratory support. This proactive approach to recognizing and addressing respiratory distress optimizes respiratory function and mitigates potential complications, fostering a responsive and patient-centered respiratory care model [ 53 ]. Pulmonary imaging and diagnostic modalities: Pulmonary imaging, including chest X-rays, is instrumental in assessing lung parenchyma and identifying potential complications. Beyond traditional radiographic methods, advanced diagnostic modalities such as lung ultrasound may be employed to evaluate lung aeration and detect pleural or pulmonary pathology. The integration of these diagnostic tools enhances the precision of respiratory assessment, enabling healthcare providers to promptly identify and address pulmonary issues that may impact the post-resuscitation respiratory status of pediatric patients [ 54 ]. Bronchoscopy and airway clearance: In situations where airway obstruction or excessive secretions are suspected, bronchoscopy is a valuable diagnostic and therapeutic tool. Bronchoscopy enables healthcare providers to assess and address airway-related concerns by directly visualizing the airways. Additionally, airway clearance techniques, including chest physiotherapy and mucolytic agents, are employed to maintain airway patency and optimize respiratory function. This proactive approach to airway management contributes to preventing complications and promoting respiratory well-being in pediatric patients post-cardiac arrest [ 55 ]. Hemodynamic management Fluid Resuscitation Early fluid resuscitation: In the immediate aftermath of resuscitation, initiating early fluid resuscitation is a pivotal component of post-cardiac arrest care. The primary objective is to restore intravascular volume and enhance perfusion swiftly. The choice between crystalloid and colloid solutions is influenced by a nuanced consideration of factors such as the patient's hemodynamic status, electrolyte balance, and specific clinical considerations. Early fluid resuscitation sets the foundation for stabilizing cardiovascular function and supporting vital organ perfusion during the critical post-resuscitation phase [ 56 ]. Assessment of fluid responsiveness: The ongoing assessment of fluid responsiveness is a dynamic process that informs continued fluid management. Careful monitoring, including dynamic indicators like changes in pulse pressure or stroke volume in response to fluid administration, is crucial in determining the patient's fluid needs. 
These indicators provide real-time insights into the patient's cardiovascular responsiveness to fluid, allowing healthcare providers to tailor fluid administration to the individual requirements of the pediatric patient post-cardiac arrest [ 57 ]. Caution in fluid administration: While fluid resuscitation is imperative, a judicious and cautious approach is essential to prevent potential complications. These complications may include fluid overload, pulmonary edema, and impaired cardiac function. Regular reassessment of the patient's hemodynamic status becomes paramount, allowing healthcare providers to adjust fluid administration based on the evolving clinical picture. This vigilant and adaptive approach maintains optimal fluid balance, ensuring that the benefits of fluid resuscitation are realized without exposing the patient to unnecessary risks [ 58 ]. Inotropic and Vasopressor Support Indications for inotropic support: In the post-cardiac arrest phase, indications for inotropic support arise when there is evidence of inadequate cardiac output or myocardial dysfunction. Inotropic agents, such as dobutamine, enhance myocardial contractility, thereby improving the heart's pumping efficiency. By augmenting cardiac output, these agents contribute to the overall goal of supporting systemic perfusion, ensuring that vital organs receive an adequate supply of oxygenated blood. The decision to initiate inotropic support is informed by a comprehensive assessment of the patient's cardiovascular function and may be a crucial intervention in optimizing post-resuscitation hemodynamics [ 59 ]. Vasopressor support for hemodynamic stability: Vasopressors, including epinephrine and norepinephrine, are critical in maintaining hemodynamic stability post-resuscitation. Indicated in the setting of hypotension or inadequate perfusion, these medications exert their effects by increasing systemic vascular resistance and elevating blood pressure. By enhancing vascular tone, vasopressors ensure adequate perfusion to vital organs, mitigating the risk of systemic hypoperfusion. Vasopressor support is a targeted intervention to address specific hemodynamic challenges and stabilize the patient's overall cardiovascular status [ 60 ]. Titration of medications: Titration of inotropic and vasopressor medications is a nuanced and individualized process based on the patient's response and ongoing hemodynamic assessment. Continuous monitoring of blood pressure, heart rate, and other relevant parameters forms the basis for the titration process. Healthcare providers carefully adjust medication dosages to achieve the desired hemodynamic goals while minimizing the risk of adverse effects. This dynamic and responsive approach ensures that inotropic and vasopressor support is tailored to the unique needs of the pediatric patient in the post-cardiac arrest phase, optimizing cardiovascular function and promoting hemodynamic stability [ 60 ]. Goal-Directed Therapy Definition and objectives: Goal-directed therapy is a strategic approach in post-cardiac arrest care that involves tailoring interventions to achieve specific hemodynamic goals. These goals typically encompass maintaining a target mean arterial pressure (MAP), ensuring adequate cardiac output, and optimizing oxygen delivery to vital tissues. The overarching objectives of goal-directed therapy are to enhance organ perfusion and prevent complications associated with inadequate perfusion. 
By setting and pursuing these specific hemodynamic targets, healthcare providers aim to support the recovery of vital organ systems and improve overall patient outcomes [ 61 ]. Hemodynamic monitoring: Continuous hemodynamic monitoring is central to the success of goal-directed therapy. Invasive monitoring methods, including arterial blood pressure monitoring and CVP measurement, provide real-time data that guide the implementation of goal-directed interventions. Non-invasive monitoring techniques, such as serial clinical assessments and echocardiography, complement invasive measures, offering a comprehensive perspective on the patient's cardiovascular status. This multi-faceted approach to hemodynamic monitoring ensures that healthcare providers have a dynamic and detailed understanding of the pediatric patient's hemodynamic profile, facilitating precise and targeted goal-directed therapy [ 62 ]. Individualized targets: Goal-directed therapy recognizes the importance of individualization in setting hemodynamic targets. The specific goals are tailored based on the patient's age, underlying medical conditions, and the nature of the cardiac arrest. This individualized approach acknowledges each pediatric patient's unique physiological characteristics and needs, allowing targeted therapy to address specific challenges. By aligning hemodynamic goals with the patient's clinical context, goal-directed therapy becomes a personalized strategy to prevent end-organ dysfunction and optimize tissue perfusion [ 63 ]. Adaptability to changing conditions: A defining characteristic of goal-directed therapy is its adaptability to changing clinical conditions. The dynamic nature of post-resuscitation care necessitates regular reassessment and adjustment of therapeutic interventions. This ensures that hemodynamic management remains responsive to the evolving needs of the pediatric patient. Adapting real-time interventions enhances goal-directed therapy's efficacy, promoting optimal cardiovascular function and resilience in the face of the complex challenges posed by post-cardiac arrest physiology [ 64 ]. Post-resuscitation monitoring Continuous Monitoring Parameters Vital signs: Continuous monitoring of vital signs, including heart rate, blood pressure, respiratory rate, and oxygen saturation, is a cornerstone in post-cardiac arrest care. This real-time assessment offers critical insights into the patient's physiological stability. Monitoring trends in these parameters enables healthcare providers to promptly identify and respond to changes, guiding adjustments to therapeutic interventions. Vital sign monitoring is instrumental in gauging the patient's response to treatment, detecting potential complications, and ensuring ongoing physiological homeostasis [ 65 ]. ECG: Continuous ECG monitoring is indispensable for detecting a spectrum of cardiac abnormalities in the post-cardiac arrest phase. This includes arrhythmias, conduction abnormalities, and myocardial ischemia or injury signs. The continuous surveillance of the ECG ensures the timely recognition of cardiac rhythm disturbances, allowing healthcare providers to intervene promptly. This vigilance is crucial in post-resuscitation care, where cardiac stability is paramount. ECG monitoring provides a continuous window into the patient's cardiac activity, guiding therapeutic decisions and contributing to the overall cardiovascular assessment [ 66 ]. 
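Because the goal-directed therapy and blood pressure monitoring described above are framed partly around a target mean arterial pressure, a small worked example may help: MAP is commonly approximated at the bedside as the diastolic pressure plus one third of the pulse pressure. The Python sketch below applies that approximation and compares it with a clinician-chosen target; the 60 mmHg target in the example is a hypothetical placeholder, not an age-specific recommendation.

def mean_arterial_pressure(systolic, diastolic):
    """Approximate MAP as diastolic plus one third of the pulse pressure."""
    return diastolic + (systolic - diastolic) / 3.0

def meets_map_target(systolic, diastolic, target_map):
    """Compare the estimated MAP against a clinician-chosen target (placeholder in the example)."""
    return mean_arterial_pressure(systolic, diastolic) >= target_map

# Example: 100/55 mmHg gives an estimated MAP of 70 mmHg, above a hypothetical 60 mmHg target
print(round(mean_arterial_pressure(100, 55)), meets_map_target(100, 55, target_map=60))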
Continuous pulse oximetry: Monitoring oxygen saturation through continuous pulse oximetry is vital for assessing respiratory function in post-cardiac arrest patients. This non-invasive method offers real-time feedback on the patient's oxygenation status. Continuous pulse oximetry is particularly important in guiding adjustments to ventilator settings or oxygen supplementation, ensuring oxygen delivery meets the patient's evolving respiratory needs. This ongoing assessment supports optimizing respiratory parameters, contributing to overall respiratory well-being post-resuscitation [ 67 ]. EtCO2: Continuous monitoring of EtCO2 levels provides valuable information about the adequacy of ventilation and perfusion. EtCO2 levels can signal respiratory or circulatory function alterations, prompting further investigation and intervention. In the post-cardiac arrest phase, EtCO2 monitoring is particularly relevant for assessing ventilation effectiveness, guiding adjustments to ventilator settings, and contributing to the early detection of potential respiratory compromise. This parameter is a critical component of the comprehensive respiratory assessment in post-resuscitation care [ 68 ]. Invasive hemodynamic monitoring: In cases where close hemodynamic assessment is warranted, invasive monitoring through arterial lines and central venous catheters offers continuous measurement of blood pressure, CVP, and other hemodynamic parameters. This direct and real-time data is invaluable for tailoring interventions to patients' cardiovascular needs. Invasive hemodynamic monitoring provides a more detailed picture of the patient's circulatory status, optimizing fluid management, vasopressor support, and overall hemodynamic stability in the critical post-cardiac arrest period [ 62 ]. Biomarkers for Prognostication Troponin: Serum troponin levels play a crucial role as biomarkers in assessing myocardial injury post-resuscitation. Troponin is specific to cardiac muscle, and elevated blood levels indicate damage to the heart muscle. Monitoring troponin levels provides valuable insights into the extent of cardiac involvement and ongoing stress on the myocardium. Elevated troponin levels may prompt interventions to optimize cardiac function, guiding the management of post-cardiac arrest patients to prevent further cardiac complications [ 69 ]. BNP: BNP levels indicate cardiac strain, particularly in assessing myocardial dysfunction. As the heart experiences increased pressure or volume overload, BNP is released. Serial measurements of BNP assist in monitoring the response to treatment and provide valuable information for predicting outcomes. Monitoring BNP levels in the post-resuscitation phase helps healthcare providers assess the cardiac component of post-cardiac arrest syndrome, guiding interventions to mitigate cardiac stress and optimize overall cardiovascular function [ 70 ]. S100B protein: S100B is a biomarker associated with neurological injury. Elevated levels of S100B in the bloodstream may suggest ongoing brain damage. Monitoring S100B levels aids in the early identification of neurological complications, allowing for timely intervention and management. This biomarker provides valuable information about the extent of neurological injury post-resuscitation, guiding healthcare providers in tailoring neuroprotective strategies and monitoring the neurological well-being of the pediatric patient [ 71 ]. Lactate: Lactate is a critical biomarker that reflects tissue perfusion and metabolic stress. 
Persistent elevation of lactate levels may indicate inadequate tissue perfusion, suggesting ongoing challenges in meeting metabolic demands. Regular monitoring of lactate levels is instrumental in guiding interventions to improve organ perfusion. Elevated lactate levels prompt healthcare providers to assess and address factors contributing to impaired tissue perfusion. This ensures timely interventions to optimize hemodynamic stability and prevent complications related to inadequate oxygen delivery to tissues [ 72 ]. Imaging Studies in the Post-Resuscitation Phase Chest X-rays: Post-resuscitation chest X-rays are pivotal in assessing pulmonary status and guiding respiratory management. These imaging studies provide valuable information about lung aeration, the correct placement of endotracheal tubes, and the presence of any pulmonary pathology. Changes observed in lung fields on chest X-rays may prompt adjustments to ventilator settings, ensuring optimal respiratory support. This diagnostic tool is instrumental in monitoring lung function and addressing potential complications related to respiratory dynamics in the post-cardiac arrest phase [ 73 ]. Echocardiography: Bedside echocardiography is a dynamic and real-time imaging tool employed to assess cardiac function post-cardiac arrest. This modality allows for the evaluation of myocardial contractility, chamber dimensions, and the presence of structural abnormalities. The information obtained from echocardiography guides ongoing hemodynamic management, providing critical insights into the cardiovascular status of the pediatric patient. By visualizing the heart in action, healthcare providers can tailor interventions to optimize cardiac function, contributing to overall hemodynamic stability [ 74 ]. Brain imaging: Neuroimaging studies, such as CT or MRI, may be considered in cases where neurological complications are suspected. These studies are crucial in identifying intracranial pathology, including potential signs of brain injury or other neurological issues. The results of brain imaging studies guide healthcare providers in making informed decisions about managing neurological complications, enabling targeted interventions to optimize brain health in the post-resuscitation phase [ 75 ]. Abdominal imaging: In selected cases, abdominal imaging studies, such as ultrasound or CT scans, may assess organ perfusion, detect potential abdominal complications, or guide interventions related to the underlying cause of cardiac arrest. These imaging modalities offer valuable information about the condition of abdominal organs, aiding healthcare providers in understanding and addressing potential complications that may impact overall patient stability. Abdominal imaging studies contribute to a comprehensive assessment, allowing for tailored interventions based on the individual patient's specific needs [ 76 ]. Long-term outcomes and rehabilitation Functional Outcomes in Survivors Neurological function: Assessing neurological function is critical in understanding the long-term outcomes of pediatric patients who have experienced cardiac arrest. Survivors may exhibit a spectrum of neurological sequelae, ranging from minimal impairment to significant deficits. Functional assessments, such as the GOS and the Pediatric Cerebral Performance Category (PCPC), play a crucial role in characterizing the extent of neurological recovery. The GOS evaluates overall outcomes, while the PCPC specifically focuses on neurological performance. 
These assessments provide healthcare providers with a comprehensive understanding of the impact of cardiac arrest on neurological function and guide interventions to support optimal recovery [ 77 ]. Motor function: Motor function is crucial to functional outcomes in post-cardiac arrest survivors. Physical therapy interventions optimize motor skills, mobility, and coordination. The assessment of motor function involves evaluating gross and fine motor abilities, adaptive behaviors, and activities of daily living. By systematically assessing motor function, healthcare providers can tailor rehabilitation strategies to address specific motor challenges and promote the highest level of independence and functionality for pediatric patients in the post-resuscitation phase [ 78 ]. Quality of life measures: Long-term outcomes extend beyond immediate medical considerations to encompass the overall quality of life for pediatric cardiac arrest survivors. Patient-reported outcome measures (PROMs) and assessments of health-related quality of life (HRQoL) provide valuable insights into recovery's psychosocial and emotional aspects. These measures capture survivors' subjective experiences and perspectives, allowing healthcare providers to understand the impact of cardiac arrest on various aspects of daily life. Assessing quality of life helps guide interventions that address medical needs and support the broader well-being and satisfaction of pediatric patients and their families throughout the recovery process [ 79 ]. Cognitive and Developmental Considerations Cognitive function: Pediatric cardiac arrest survivors may encounter challenges related to cognitive function, emphasizing the importance of thorough neuropsychological assessments. These assessments help identify deficits in specific cognitive domains such as attention, memory, executive function, and academic performance. Understanding the nuanced cognitive profile of each survivor allows healthcare providers to tailor interventions, including cognitive rehabilitation programs. These programs aim to mitigate cognitive impairments, support educational attainment, and facilitate a smoother transition back into academic settings. By addressing cognitive challenges, healthcare providers contribute to pediatric patients' overall well-being and potential future success in their cognitive development [ 80 ]. Developmental milestones: Given the age of pediatric patients, monitoring developmental milestones is crucial for assessing long-term outcomes. Developmental assessments track progress in language acquisition, socialization, and adaptive skills. Early identification of developmental delays is essential for implementing timely interventions, which may include speech therapy, occupational therapy, and educational support. By addressing developmental needs, healthcare providers contribute to the holistic development of pediatric patients, aiming to enhance their overall functioning and independence as they progress through childhood and adolescence [ 81 ]. Psychosocial well-being: The psychosocial impact of cardiac arrest and its aftermath should not be underestimated. Mental health assessments and counseling services are crucial in supporting pediatric patients and their families through the emotional challenges of recovery. Assessments of psychosocial well-being help identify any emotional or psychological distress, allowing for targeted interventions. 
Counseling services provide a supportive space for patients and their families to navigate the emotional complexities of the post-resuscitation period. By addressing psychosocial needs, healthcare providers contribute to pediatric patients' mental and emotional resilience, fostering a positive trajectory in their overall recovery journey [ 82 ]. Rehabilitation Strategies Physical therapy: Physical therapy is a cornerstone of rehabilitation, focusing on enhancing motor function, strength, and coordination in pediatric cardiac arrest survivors. Individualized rehabilitation plans are tailored to each patient's needs, incorporating various exercises, mobility training, and activities to improve overall physical well-being. Physical therapists work collaboratively with patients to address motor challenges, optimize mobility, and promote physical independence. Through targeted interventions, physical therapy contributes to the restoration of motor skills and the overall functional capacity of pediatric patients, fostering their ability to engage in daily activities [ 83 ]. Occupational therapy: Occupational therapy is crucial in addressing daily living skills, fine motor coordination, and adaptive behaviors in pediatric cardiac arrest survivors. Interventions provided by occupational therapists focus on enhancing independence in self-care, school-related tasks, and age-appropriate activities. By targeting specific areas of functional impairment, occupational therapy aims to improve overall occupational performance and quality of life. Incorporating adaptive strategies and skill-building activities supports pediatric patients in achieving greater autonomy and participation in meaningful daily activities [ 84 ]. Speech and language therapy: Speech and language therapy is essential for pediatric patients facing communication challenges post-cardiac arrest. This form of therapy encompasses interventions to improve articulation, language comprehension, and social communication skills. Speech and language therapists work with patients to address speech impediments, language deficits, and communication difficulties. The goal is to enhance effective communication, facilitate social interactions, and support academic and social success for pediatric patients on their journey to recovery [ 85 ]. Neuropsychological rehabilitation: Neuropsychological rehabilitation addresses cognitive deficits in pediatric cardiac arrest survivors. This form of rehabilitation provides strategies to enhance attention, memory, and executive function. Educational support and accommodations are often integral to neuropsychological rehabilitation, ensuring pediatric patients receive the necessary resources to navigate academic challenges. By tailoring interventions to the specific cognitive needs of each patient, neuropsychological rehabilitation supports the development of cognitive skills. It contributes to the overall cognitive well-being of pediatric cardiac arrest survivors [ 86 ]. Family-centered care: In the pediatric population, family-centered care is paramount in the rehabilitation process for cardiac arrest survivors. This approach emphasizes families' involvement in their child's care and recovery. Healthcare providers collaborate closely with families, recognizing them as essential partners in decision-making and goal-setting. 
By fostering open communication and involving families in the rehabilitation plan, family-centered care creates a supportive and collaborative environment that enhances pediatric cardiac arrest survivors' overall well-being and recovery [ 87 ]. School reintegration: Successful school reintegration is a crucial aspect of the rehabilitation process for pediatric cardiac arrest survivors. Collaboration with educational professionals is essential to develop individualized education plans (IEPs) and accommodations that support academic progress while addressing any challenges arising from cognitive or physical impairments. By working closely with schools, healthcare providers ensure a seamless transition back into the educational environment, providing the necessary support for the child to thrive academically and socially [ 88 ]. Community-based interventions: Engaging with community resources and support networks is instrumental in promoting the overall well-being of pediatric cardiac arrest survivors. Community-based interventions extend beyond clinical settings and may include recreational therapy, socialization programs, and peer support. These interventions enhance the child's social integration, encourage participation in community activities, and provide a holistic approach to rehabilitation. By tapping into community resources, healthcare providers contribute to the broader support network that plays a vital role in the ongoing recovery and quality of life for pediatric patients and their families [ 89 ].
CC BY
no
2024-01-16 23:43:49
Cureus.; 15(12):e50565
oa_package/f5/52/PMC10788704.tar.gz
PMC10788705
37327613
Introduction Artificial intelligence (AI) techniques have the potential to be used as decision support tools in medicine, for example in applications such as diagnosing disease or predicting response to treatment. However, in recent years even though AI models have dominated medical research they are often developed without consideration of how the models will be used in clinical practice. Specifically, the lack of trust in automated predictions for clinical applications is a major barrier preventing clinical adoption ( Linardatos et al., 2021 ). One way to provide a level of trust in AI predictions is to estimate uncertainty or classification confidence and provide end-users with the confidence score as well as the prediction. Ideally, models used in a decision support setting should avoid confident wrong predictions and maximise the confidence of correct predictions. The concept of model calibration refers to the relationship between the accuracy of predictions and their confidence: a well-calibrated model will be less confident when making wrong predictions and more confident when making correct predictions. With this in mind measures of model calibration have been proposed that provide a more complete understanding of the performance of a predictive model by estimating how closely the predictive confidence matches its accuracy ( Nixon et al., 2019 ). However, relatively little attention has been paid to how to optimise AI models with respect to model calibration. If training could be performed in such a way as to maximise both accuracy and calibration this would have the potential to provide a level of trust and reliability in model outputs ( Gawlikowski et al., 2021 , Sensoy et al., 2021 ). In this paper we investigate training schemes that aim to improve model calibration as well as accuracy, with a specific focus on deep learning (DL) models. These schemes are collectively known as uncertainty-aware training methods. We utilise recent advances in uncertainty estimation and uncertainty-aware training to investigate multiple methodologies to identify the best performing strategy with respect to accuracy and calibration. Specifically, we investigate two applications from cardiology: prediction of response to cardiac resynchronisation therapy (CRT) from pre-treatment cardiac magnetic resonance (CMR) images, and diagnosis of coronary artery disease (CAD), again from CMR images. In this introduction, we first focus on techniques utilised to estimate uncertainty, followed by a discussion on model calibration and then we move on to methods to develop uncertainty-aware AI models. Finally, we provide an overview of the contributions of our research. Uncertainty estimation Two commonly identified sources of uncertainty are aleatoric uncertainty, which is caused by noisy data inputs and epistemic uncertainty, which is the uncertainty inherent in the model itself ( Hüllermeier and Waegeman, 2021 ). Aleatoric uncertainty is irreducible as the ‘noise’ present in the input data cannot be altered. Epistemic uncertainty, however, may be improved by providing more knowledge through larger and more varied datasets ( Abdar et al., 2021 , Gawlikowski et al., 2021 ). Epistemic and aleatoric uncertainty estimates for task DL models have predominantly been made using Bayesian approximation, ensemble methods and test-time augmentation ( Abdar et al., 2021 , Gawlikowski et al., 2021 ). 
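To make these estimation strategies concrete, the sketch below illustrates the two ideas on synthetic tabular data: epistemic uncertainty as the disagreement of a bootstrap ensemble, and aleatoric uncertainty as the spread of predictions over perturbed (test-time augmented) inputs. The logistic-regression models, noise level and dataset are illustrative stand-ins, not the pipelines used in the works cited here.

```python
# Illustrative sketch (not the cited pipelines): epistemic uncertainty from an
# ensemble of classifiers and aleatoric uncertainty from test-time augmentation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_test = X[:50]

# Epistemic: train several models on bootstrap resamples and measure disagreement.
ensemble = []
for seed in range(10):
    idx = rng.integers(0, len(X), len(X))
    ensemble.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))
probs = np.stack([m.predict_proba(X_test)[:, 1] for m in ensemble])  # (models, samples)
epistemic = probs.std(axis=0)   # spread across ensemble members

# Aleatoric: perturb each test input (a stand-in for image augmentation) and
# measure the spread of predictions from a single fixed model.
model = LogisticRegression(max_iter=1000).fit(X, y)
aug = np.stack([X_test + rng.normal(0, 0.1, X_test.shape) for _ in range(20)])
aug_probs = np.stack([model.predict_proba(a)[:, 1] for a in aug])
aleatoric = aug_probs.std(axis=0)

print(epistemic[:5], aleatoric[:5])
```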
Bayesian DL aims to model a distribution over the model’s weights and is a favoured method for uncertainty estimation, as the modelling of an approximated posterior distribution provides the ability to produce more representative epistemic uncertainty estimations ( Abdar et al., 2021 ). However, approximation methods are required to compute the estimates requiring more computational effort for both training and inference ( Gawlikowski et al., 2021 , Alizadehsani et al., 2021 ). Ensemble methods seek to train multiple models, each with different parameters, which are then used to generate multiple predictions from which the variance in predicted classes can be considered a measure of the epistemic uncertainty ( Gawlikowski et al., 2021 ). For example, Mehrtash et al. (2020) demonstrated the use of ensembles to quantify a model’s predictive uncertainty for medical image segmentation, using MR images of the brain, heart and prostate. Aleatoric uncertainty is often estimated by augmenting test data to generate multiple test samples and measuring the variance in the predictions whilst keeping the model architecture intact ( Shorten and Khoshgoftaar, 2019 ). An example of this type of approach is Wang et al. (2018) , who investigated test-time uncertainty estimation to improve automatic brain tumour segmentation tasks using random flipping and rotation, later expanding their research to epistemic uncertainty ( Wang et al., 2019 ). Currently these approaches have predominantly been applied to medical image segmentation applications and less so for classification applications such as predicting diagnosis or treatment response ( Abdar et al., 2021 , Gawlikowski et al., 2021 ). Therefore, actively researching and improving uncertainty estimation techniques to identify more calibrated and easily scaleable estimates will aid the development of trustworthy decision support tools for clinicians ( Gawlikowski et al., 2021 ). Model calibration Quantifying uncertainty of DL models has highlighted underlying problems of DL architectures. In particular, the Softmax probability function, often used as the final layer of a DL classification model has been shown to provide over-confident predictions for both in and out of distribution data ( Kompa et al., 2021 , Gawlikowski et al., 2021 ). Additionally, the hard label binary classification approach has been shown to have a negative impact by overestimating confidence in predictions, indicating that a softer approach may provide a more reliable method mimicking real world behaviour ( Thulasidasan et al., 2019 ). Guo et al. (2017) highlighted that while developments have been made to produce a variety of architectures and uncertainty estimations for DL models, evaluating the calibration of models is necessary to understand and interpret probability estimates. To this end, Guo et al. (2017) proposed the Expected Calibration Error (ECE) metric, which partitions or bins confidences and utilises the accuracy and confidence estimates over all sets of samples in all bins to provide a measure of model calibration. Interestingly, Nixon et al. (2019) investigated the shortfalls of the ECE, noting that the choice of the number of bins has the potential to skew results. This influence is noticeable when visualised on an illustrative representation of the ECE referred to as a reliability diagram. Responding to this weakness, an alternate measure called the Adaptive ECE (AECE) has been suggested based on an adaptive binning strategy ( Ding et al., 2020a ). 
The authors argue that AECE provides a robust approach to handle non-uniform confidence calibration and enables enhanced visual illustrations in reliability diagrams. Often calibration errors are also evaluated with the Maximum Calibration Error (MCE), which quantifies the largest deviation across the confidence bins ( Guo et al., 2017 ). Overconfidence Error (OE) is an additional calibration performance metric which penalises predictions by the weight of the confidence but only when confidence exceeds accuracy ( Thulasidasan et al., 2019 ). OE has been proposed as an appropriate calibration metric for high risk applications such as healthcare where it is important to avoid confident wrong predictions ( Thulasidasan et al., 2019 ). Alternate metrics such as the Brier Score (BS) have been utilised in the literature and considered as a proper scoring rule, computed using uncertainty, resolution and reliability. However, the measure has the potential to under-penalise predictions with lower probabilities. Despite the range of metrics presented, studies continue to investigate alternate, standardised and improved methods to understand and evaluate the calibration of DL models ( Ashukha et al., 2020 , Ovadia et al., 2019 ). To date, most of this research has focused on computer vision problems, and little work has evaluated the utility of these measures on real-world medical applications. Uncertainty-aware training Uncertainty-aware training refers to methods that incorporate uncertainty information into the training of a DL model with the aim of improving its model calibration. Yu et al. (2019) provide an example of this type of approach, demonstrating how a DL model can learn to gradually exploit uncertainty information to provide more reliable predictions for a 3D left atrium segmentation application. Alternate approaches aim to directly target confident cases based on an acceptable user-risk level, for example the ‘selective’ image classification method proposed in Geifman and El-Yaniv (2017) . Uncertainty estimates have been directly incorporated into the loss function of the model, as proposed by Ding et al. (2020b) for a segmentation task. The outcomes demonstrated the ability to maximise performance on confident outputs and reduce overconfident wrong predictions. In our previous work ( Dawood et al., 2021 ), we used a similar approach to Ding et al. (2020b) and proposed, for the first time, an uncertainty-aware DL model for CRT response prediction, as a preliminary investigation to evaluate changes in predictive confidence. We used confidence bands estimated at test time to highlight an improvement in the confidence of correct predictions and a reduction in confidence of incorrect predictions. Another group of methods has attempted to define differentiable loss terms that directly quantify model calibration ( Krishnan and Tickoo, 2020 , Karandikar et al., 2021 ). Alternately, building on recent work on Evidential DL ( Sensoy et al., 2018 , Sensoy et al., 2020 ), a Bayesian methodology was incorporated into Evidential DL by Sensoy et al. (2021) . They utilised probability distributions to obtain uncertainty in predictions for each category/class and introduced new methods to handle the risk associated with incorrect predictions. 
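As a concrete reference for the calibration metrics discussed above, the following sketch computes ECE, MCE and an overconfidence-style error from a vector of predicted confidences and correctness indicators using simple equal-width binning. The binning choices and toy inputs are assumptions for illustration only.

```python
# Illustrative computation of ECE, MCE and OE from predicted confidences,
# using equal-width binning in the spirit of the standard definitions cited above.
import numpy as np

def calibration_errors(confidence, correct, n_bins=10):
    """confidence: predicted-class probability; correct: 1 if the prediction was right."""
    confidence, correct = np.asarray(confidence), np.asarray(correct)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = mce = oe = 0.0
    n = len(confidence)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if not in_bin.any():
            continue
        acc = correct[in_bin].mean()
        conf = confidence[in_bin].mean()
        weight = in_bin.sum() / n
        gap = abs(acc - conf)
        ece += weight * gap
        mce = max(mce, gap)
        oe += weight * conf * max(conf - acc, 0.0)   # penalise over-confidence only
    return ece, mce, oe

conf = np.array([0.9, 0.8, 0.95, 0.6, 0.55, 0.7])
corr = np.array([1, 0, 1, 1, 0, 1])
print(calibration_errors(conf, corr, n_bins=5))
```

In practice the number of bins materially affects the ECE estimate, which is exactly the weakness that motivates the adaptive binning strategies mentioned above.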
The continued research and improvements within the field therefore highlight the need to incorporate uncertainty estimation when training DL models as it will likely become a vital component in high-risk applications such as diagnostic predictions in healthcare ( Gawlikowski et al., 2021 ). Contributions In this paper, we seek to perform a thorough investigation of uncertainty-aware DL training methods and evaluate them on two real-world clinical applications. Our contributions are: 1. We propose three novel uncertainty-aware training strategies (including the one proposed in our preliminary work ( Dawood et al., 2021 )), and compare them to two state-of-the-art methods from the literature. 2. We evaluate all models on two realistic medical imaging applications: CRT response prediction and CAD diagnosis, both from CMR images. 3. We use a wide range of calibration performance measures proposed in the literature, combined with a reliability diagram based on adaptive binning to understand the effects of different uncertainty-aware training methods. 4. We further quantify the performance of all models in terms of aleatoric and epistemic uncertainty. 5. We evaluate the impact of using a calibration-based model selection criterion on accuracy and calibration performance. The paper is structured as follows. In Section 2 we describe our uncertainty-aware strategies and the comparative approaches. In Section 4 we present all experiments performed to evaluate and compare the different approaches with all results found in Section 5 . Section 6 then discusses the findings, evaluates the outcomes and recommends future work towards cultivating trustworthy and calibrated predictive DL classification models.
Methods In this Section we introduce the different uncertainty-aware and comparative strategies used and evaluated in the paper. Notation Before presenting our novel and comparative approaches for uncertainty-aware training, we first define a common notation that will be used in the subsequent descriptions. Throughout, represents the probability of an event, A B represents the intersection of set A and set B , A B represents their union, denotes the cardinal of the set A and its complement. Hyperparameters of the network are denoted by Greek lowercase letters: is trainable whereas , and are hyperparameters. We denote by B the set of samples in a training batch. For a binary classifier with trainable parameters , we define: • the samples labelled as ground truth positive ( ) • the samples classified by the model as positive ( ) • the samples correctly classified ( ) • the samples with an “uncertain” classification (based on their classification confidence) ( ) • ] (for a sample from ) • is the confidence (probability) of the model-predicted class for sample , i.e. • is the ground truth label of sample (i.e. 1 for positive and 0 for negative) Baseline model The diagram in Fig. 1 illustrates the architecture of the baseline classification model developed by Puyol-Antón et al., 2020 , Dawood et al., 2021 , which was used as the framework to perform the experiments. The baseline model utilises CMR short axis (SA) image segmentations produced by a pre-trained U-net ( Chen et al., 2020 ). These segmentations are used as input into a variational autoencoder (VAE), which during the training phase is tasked with reconstructing the segmentations frame-by-frame from the learned latent representations. Subsequently, a classifier is trained to make predictions from the concatenated VAE latent spaces of the time series of CMR SA segmentations. The points at which aleatoric uncertainty and epistemic uncertainty are estimated (see Section 5.3 ) are shown in the dotted blocks in Fig. 1 . In the literature, quantifying aleatoric uncertainty has often been performed using data augmentation at test time ( Ayhan and Berens, 2018 ). In our work, we produced realistic augmentations for this purpose by using the U-net segmentation model to generate multiple segmentations which were inputted into the VAE/classifier to estimate aleatoric uncertainty. To quantify epistemic uncertainty we drew multiple samples from the learned VAE latent space. Formally, we define the loss function of the baseline model as comprising three terms: In the above equation, is the cross entropy between the input segmentations to the VAE and the output reconstructions, is the Kullback–Leibler (KL) divergence between the latent variables and a unit Gaussian, represents the binary cross entropy loss for the classification task and , are used to weight the level of influence each term has to the total loss. Novel approaches We now propose three novel approaches to uncertainty-aware training, each based on modifying the baseline model with the aim of improving model calibration. Two weighted loss terms are introduced in Sections 2.3.1 , 2.3.2 respectively. For these a new loss term and hyperparameter are added to the baseline Eq. (1) to define the general form below: Furthermore, in Section 2.3.3 we develop a confidence-based weighting scheme applied to the classifier loss . Paired Confidence Loss We introduced this method in our preliminary work in Dawood et al. (2021) . This approach was inspired by the work of Ding et al. 
(2020b) , who proposed an uncertainty-aware training method for image segmentation that focused the loss function on more confident outputs. We adapted this approach to image-based classification problems. The final loss function implemented is: Here, the first sum is over samples ( ) classified as positive ( ) or negative ( ) in a batch, and in the second sum the are pairs of false positive (or negative) and true positive (or negative) samples. Intuitively, Eq. (3) will evaluate all pairs of correct/incorrect positive/negative predictions in a training batch, and the terms will be positive when the incorrect prediction ( ) has higher confidence than the correct one ( ). If the correct one has higher confidence than the incorrect one by a margin of the hyperparameter or more the terms will be zero. Note that in the max term of Eq. (3) , the probability of a correct prediction is subtracted from the probability of an incorrect prediction. Probability Loss In our second novel method, we again adapted the baseline model loss function to more heavily penalise incorrect predictions with high confidence. We note that the standard cross entropy loss already penalises such cases. However, it is well-known that models trained with cross-entropy loss are prone to poor calibration ( Guo et al., 2017 ), and this motivated the formulation of the Probability Loss approach: As before, the developed loss term is added into the loss function of the model to follow the form in Eq. (2) . The Probability Loss function differs from the approach described in Section 2.3.1 as the terms represent the class probabilities of the classifier (after the Softmax layer) for positive and negative ground truth samples. Intuitively, this loss term penalises ground truth positive (negative) samples with high confidence in negative (positive) prediction. The terms are normalised by the number of samples for the positive and negative classes in the training batch. Confidence Weight An alternative solution to defining a new loss term is to add a weighting term to the existing classifier loss to penalise training samples with highly confident incorrect predictions. The weighting term is determined by first estimating the epistemic uncertainty of each prediction in the batch by sampling in the latent space of the VAE. Specifically, we randomly sampled 20 points from the VAE latent space and computed predictions for each one. The prediction confidence was calculated as the proportion of positive predictions from these samples and we denote this by for sample . See Section 5.3 for further details of the epistemic uncertainty estimation. The weighting term for each sample in the batch was computed as follows: Here, denotes scalar multiplication. Intuitively, these weights will be high when making a confident wrong prediction, thus encouraging the model training to focus on minimising such cases. was then scaled to produce to ensure that the weights would not drop below a pre-defined value w : Note that w is a hyperparameter that is optimised during the training of the classifier. Comparative approaches We now present three existing methods proposed in the recent literature as our comparative uncertainty-aware training strategies. Accuracy versus Uncertainty Loss Recent work by Krishnan and Tickoo (2020) utilised the relationship between accuracy and uncertainty to develop a loss function aimed at improving model calibration. 
A differentiable Accuracy versus Uncertainty (AvUC) loss function was developed by placing each prediction into one of four categories; accurate and certain, accurate and uncertain, inaccurate and certain and lastly inaccurate and uncertain. Utilising these four categories, a differentiable loss term was defined as follows: Similar to the methods proposed in Sections 2.3.1 , 2.3.2 , the final loss function follows the same structure as the baseline model and the AvUC loss is added to the total loss, weighted by the hyperparameter as presented in Eq. (2) . Soft ECE loss function Karandikar et al. (2021) extended research performed by Krishnan and Tickoo (2020) , leveraging on the approach of a differentiable loss function to improve calibration. However, here they investigated the ECE measure as a differentiable loss. To implement the loss they introduced a soft binning function scaled with a soft binning temperature , (see Section 4.3 ). Below we define this loss function using our notation, we refer the reader to Karandikar et al. (2021) for a full explanation using the original notation. Here, we use to denote the set of samples that fall into confidence bin , represents the number of bins, and and represent the confidence bins and are related using the soft binning membership function described in Krishnan and Tickoo (2020) . represents the average accuracy within bin (i.e. the proportion of that are correctly classified) and represents the average confidence within bin (i.e. the average of within ). The term is the order of the soft binning function. Maximum Mean Calibration Error loss function Kumar et al. (2018) utilised a reproducing kernel Hilbert space (RKHS) approach with a differentiable loss function to improve calibration, which they termed the Maximum Mean Calibration Error (MMCE). Here, represents the Hilbert space kernel and all other terms are as defined in Section 2.1 . Further detailed derivations and explanations using the original notation are provided in Kumar et al. (2018) .
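Before turning to the results, the following PyTorch sketch illustrates the common thread of the training strategies above: re-weighting or augmenting the classification loss so that confidently wrong predictions are penalised more heavily. It is a simplified illustration of that idea, not a reimplementation of any of the specific loss terms defined in this section; the weighting scheme and constants are assumptions.

```python
# Minimal sketch of uncertainty-aware training: up-weight the classification loss
# of samples that are confidently misclassified. Illustrative only.
import torch
import torch.nn.functional as F

def confidence_weighted_ce(logits, targets, min_weight=1.0, scale=2.0):
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)                 # confidence of the predicted class
    wrong = (pred != targets).float()
    # weight grows with confidence, but only for wrong predictions
    weights = min_weight + scale * wrong * conf
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_sample).mean()

logits = torch.tensor([[2.0, -1.0], [0.2, 0.1], [-1.5, 2.5]])
targets = torch.tensor([1, 0, 1])                 # first sample is confidently wrong
print(confidence_weighted_ce(logits, targets))
```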
Experimental results Evaluation metrics In our work we present a number of performance metrics to evaluate our uncertainty-aware strategies. First, we utilise the conventional classification performance measures: sensitivity, specificity and BACC ( Carrington et al., 2021 ). Second, we include the ECE value ( Guo et al., 2017 ) as a measure of model calibration. The confidence used when calculating ECE was the predicted probability after the Softmax layer. A set number of confidence bins was chosen and the average accuracy achieved by the model for all samples that fall into each confidence bin was computed. We then calculate the ECE as follows: $\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n} \left| \mathrm{acc}(B_m) - \mathrm{conf}(B_m) \right|$ (10). In Eq. (10) , the confidences are grouped into $M$ bins, $B_m$ is the set of samples whose predictions fall into bin $m$, and $n$ is the total number of samples in all the bins, with corresponding accuracies ( $\mathrm{acc}(B_m)$ ) and confidences ( $\mathrm{conf}(B_m)$ ) ( Guo et al., 2017 ). Our next calibration measure is the Overconfidence Error (OE), which aims to quantify confident wrong predictions and is computed as follows: $\mathrm{OE} = \sum_{m=1}^{M} \frac{|B_m|}{n}\, \mathrm{conf}(B_m) \cdot \max\bigl(\mathrm{conf}(B_m) - \mathrm{acc}(B_m),\, 0\bigr)$ (11). Once again the Softmax confidences are grouped into bins, $B_m$ is the set of samples whose predictions fall into bin $m$, and $n$ is the total number of samples in all the bins, with corresponding accuracies ( $\mathrm{acc}(B_m)$ ) and confidences ( $\mathrm{conf}(B_m)$ ) ( Thulasidasan et al., 2019 ). Our third calibration measure is the Maximum Calibration Error (MCE), which is based on the ECE equation but finds the maximum calibration error across the bins ( Guo et al., 2017 ). Our final calibration measure is the Brier Score (BS), which is a cost function that evaluates the accuracy of probabilistic predictions, using the prediction probability from the Softmax layer, as presented in Eq. (12) : $\mathrm{BS} = \frac{1}{N} \sum_{i=1}^{N} \lVert p_i - y_i \rVert^2$ (12). Here, $N$ represents the number of samples, $p_i$ the predicted probability vector and $y_i$ represents the ground truth one-hot encoded vector. A low BS indicates a well calibrated model. Evaluation of uncertainty-aware models Accuracy The performance in terms of classification accuracy of each uncertainty-aware model on both the CRT response prediction and CAD diagnosis tasks is presented in Table 3 . Analysing the results, we can see that the Confidence Weight term produced the highest test BACC for both tasks. McNemar's non-parametric test was used to test whether the baseline classifier and each of the uncertainty-aware classifiers had statistically significantly different classification performances at a significance level of 0.05. For the CAD model there was a significant difference across all strategies, indicated with asterisks, but for the CRT response models (which had a smaller test set) the tests indicated no statistically significant differences. Calibration All calibration measures computed are presented in Table 4 for the CRT response prediction and CAD diagnosis models respectively, with the best performing metrics indicated in bold. The experiments were run three times with different random weight initialisations and the mean and standard deviation of all metrics are shown. For all metrics, a lower score implies a better calibrated model. The most widely used calibration metric in the literature has been the ECE. The results indicate that the Confidence Weight term reduced the ECE measure the most on both the CRT and CAD predictive models. However, this conclusion is not as clear when considering the other calibration metrics, with all tested models (including the baseline) performing best according to at least one metric for one experiment.
However, we note that the results for the CRT experiment might be less reliable due to the smaller test set size. To visualise the calibration performance of the different models, we present reliability diagrams in Fig. 2 , Fig. 3 for our larger cohort of CAD subjects using AECE. The reliability diagram plots accuracy against confidence and a perfectly calibrated model would have a line close to identity. We can see that the Confidence Weight model ( Fig. 3 d) shows the most improvement across the confidence bands, however improvement is still lacking in the high confidence bands. Uncertainty quantification To further understand the effect of uncertainty-aware training on model calibration we now estimate the aleatoric and epistemic uncertainties of our different models. The specific points at which uncertainty was estimated are illustrated in Fig. 1 . To estimate the aleatoric uncertainty we generated multiple plausible segmentation inputs to the VAE using inference-time dropout in the segmentation model with probability=0.2, similar to Dawood et al. (2021) . Aleatoric uncertainty was then estimated using the prediction of the original data’s segmentations and those from 19 additional segmentation sets generated in this way, i.e. the original and 19 additional segmentations were propagated through the VAE and classifier. We note that using dropout in the segmentation model approximates the epistemic uncertainty of the segmentation model. However, the multiple segmentations generated using this approximation are passed as inputs into the VAE and classification model, in this way they can be used to approximate the aleatoric uncertainty of the VAE/classifier. The epistemic uncertainty of the baseline and uncertainty-aware model was estimated using random sampling in the latent space of the VAE. Again, the original embedding together with 19 additional random samples were used for estimating epistemic uncertainty. Increasing the number of samples from the latent space did not have a statistically significant difference on the estimate but did adversely affect execution time, therefore just 20 samples were used for epistemic uncertainty estimation. For both types of uncertainty, the outputs of the Softmax layer were used to compute prediction confidence/uncertainty as a percentage of positive predictions out of the 20 samples. The values for all metrics for the epistemic and aleatoric uncertainty for both CRT response and CAD diagnosis models are presented in Table 5 , Table 6 respectively, with the lowest and optimal metric highlighted in bold. Interestingly, we see different outcomes for both CRT and CAD in the presence of epistemic and aleatoric uncertainty. The results indicate that the Confidence Weight model has a lower ECE than the baseline model for epistemic uncertainty, as can be seen in Table 5 but a similar outcome and consistency was not seen in the presence of aleatoric uncertainty for CRT. Similar to CRT, the CAD results in Table 6 highlight the same outcomes. Interestingly it may imply the modelling of aleatoric uncertainty may need further refinement. However, one may also argue that the ECE value might not be an optimal metric to utilise to assess calibration performance as our application is in a high risk setting, and therefore the OE measure could be a more appropriate metric. However, analysing the OE, a consistent outcome across both applications was not seen but did match the representation on the reliability diagram. 
One noticeable outcome for the CRT application was that the Confidence Weight model did seem to handle uncertainty with noticeably lower OE values. Comparison of validation accuracy and calibration metric-based model selection In this section we continue to analyse our uncertainty-aware training methods by investigating two different approaches for model selection. We use only the CAD diagnosis application for this analysis due to its larger training and test set sizes. Most current research utilises the highest validation accuracy to identify the best/optimal performing model (up until this point we have used BACC). However, in our work we aim to provide more evidence of the optimal uncertainty-aware model by investigating if different optimal models would be obtained if we instead used lowest validation ECE as the criterion for model selection. We chose ECE as it is still the most common and widely utilised calibration measure, even with its weaknesses ( Roelofs et al., 2022 ). We illustrate how the use of ECE and BACC as model selection criteria can affect the optimal performing model by indicating the test ECE and test BACC in Fig. 4 , Fig. 5 respectively. Here, the orange bars indicate the result when using validation ECE as the model selection criterion and the blue bars are the results when using validation BACC as the selection criterion.
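The comparison amounts to selecting a different checkpoint depending on the criterion, as in the minimal sketch below; the checkpoint metrics are invented placeholders, not values from the study.

```python
# Sketch of the two model-selection criteria compared here: highest validation
# balanced accuracy versus lowest validation ECE. Values are placeholders.
checkpoints = [
    {"epoch": 10, "val_bacc": 0.66, "val_ece": 0.12},
    {"epoch": 20, "val_bacc": 0.70, "val_ece": 0.15},
    {"epoch": 30, "val_bacc": 0.69, "val_ece": 0.08},
]

best_by_bacc = max(checkpoints, key=lambda c: c["val_bacc"])   # epoch 20
best_by_ece = min(checkpoints, key=lambda c: c["val_ece"])     # epoch 30
print(best_by_bacc["epoch"], best_by_ece["epoch"])
```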
Discussion In this paper we have proposed three novel uncertainty-aware training approaches, our Paired Confidence Loss from our preliminary investigation ( Dawood et al., 2021 ), a Probability Loss function and a Confidence Weight term. Three comparative state-of-the-art approaches were also evaluated, Accuracy versus Uncertainty Loss, Soft ECE and MMCE Loss. All six strategies were evaluated for two clinically realistic CMR-based classification problems with the aim of finding a preferred uncertainty-aware strategy that can promote clinical trust in a decision support setting. Specifically, we want to reduce confident incorrect predictions and improve confidence in correct predictions. In our work we utilised both accuracy and calibration measures to identify the best performing model and also investigated different approaches for model selection, using the highest validation BACC versus the lowest validation ECE. Model performance Overall, according to the most commonly used calibration metric (ECE), our novel Confidence Weight strategy performed the best across both the CRT and CAD applications. However, for the CAD diagnosis model, the MCE for one of the bins in the Confidence Weight strategy indicated a high calibration error of 0.84, which may be attributed to the large deviation away from ideal calibration for lower confidence samples. However, considering that our goal for a high risk application is to identify and reduce overconfident wrong predictions, these low confidence bins might be less important. In our setting, after analysing our results, we argue that the overconfidence error may be a better measure to evaluate uncertainty-aware training methods, focusing as it does on overconfident wrong predictions. By this measure, the best-performing models for the CAD diagnosis task are the Paired Confidence Loss and the Soft ECE Loss. However, our results highlight a fundamental difficulty with assessing model calibration using a single metric such as ECE. Specifically, our results tend to indicate that the calibration metrics do not completely agree. As an example, for the CAD diagnosis problem the ‘best’ model according to ECE actually increases the overconfidence error, maximum calibration error and Brier score. Likewise, on the same task the best-performing model according to Brier score is the Paired Confidence Loss, however this does not reduce ECE significantly. Analysis of the reliability diagram does allow us to explain some of the differences between ECE and Brier score, as Brier score is known to be insensitive to lower probabilities if fewer and infrequent samples lie within these bands ( Ovadia et al., 2019 ). Analysing the overconfidence error, which we believe has the potential to be more useful for high risk applications, we see that the Confidence Weight model is no longer the best-performing model when analysed as a stand-alone calibration metric. Model selection Interestingly, we found that for the Soft ECE loss and the Confidence Weight strategies the optimal performing model was not affected by the model selection criterion. When analysing the baseline and other uncertainty-aware strategies, a surprising result can be observed: choosing the model based on validation BACC yielded better ECE values but the accuracies achieved were lower. However, some of these differences were relatively small and so require further investigation. 
We also note that the AvUC method had an optimal model when utilising the validation ECE but had poorer performance if the best validation BACC was utilised. Our analysis suggests that the choice of model selection criterion may be important for uncertainty-aware training methods, a point that we do not believe has been highlighted before in the literature. However, it appears that there is no single correct model selection measure that will consistently achieve good model calibration outcomes. Overall, we argue that the best approach may be to look at a range of model selection metrics and choose the model that maximises both accuracy and calibration, with the calibration metric(s) being chosen to suit the context of the intended application. Limitations and future work In our work we made use of Softmax probabilities, which are widely utilised and accepted but are known to be less calibrated estimates of uncertainty ( Gupta et al., 2020 ). Additionally, our VAE architecture using multiple time-based image stacks may have prevented robust estimates of uncertainty and limited calibration performance. In future work we will aim to incorporate alternative direct methods of uncertainty estimation during training of DL models, to reduce overestimation and underestimation of confidence, which is known to be an ongoing research problem within the field of uncertainty estimation and model calibration. Future work will also focus on more extensive investigation and analysis of uncertainty-aware training methods for a wider range of clinical problems. We will investigate the development of alternate calibration metrics which are more tuned to specific (clinical) contexts and/or are less biased and more applicable to the healthcare setting. Furthermore we will investigate alternate architectures for quantifying uncertainty in a robust manner as well as alternate strategies for improving calibration such as focal loss ( Kumar and Sarawagi, 2019 ). Additionally, we plan to investigate the impact of label smoothing ( Carse et al., 2022 ) on our uncertainty-aware approaches. In this paper we chose to focus on uncertainty-aware training methods, rather than approaches that alter the training labels, but we note that label smoothing approaches could be combined with any uncertainty-aware training method, and the interaction of these two approaches should be thoroughly investigated. We will also investigate the possibility of using other calibration metrics, such as overconfidence error, for model selection, rather than BACC and ECE as we have investigated in this paper. In addition, we believe that it is important to evaluate the impact of AI on clinical workflows in a decision support setting, and the importance of model calibration on this impact. Future work will also focus on this area.
Conclusion In summary, we have investigated a range of different calibration metrics to assess our uncertainty-aware training methods. In terms of the most commonly used calibration metric (ECE), the Confidence Weight approach resulted in the best-calibrated models. However, we highlighted that the choice of best model would vary depending on the metric used. We have argued that overconfidence error might be the most appropriate metric for high risk medical applications, and in terms of overconfidence error the best-performing models were the Paired Confidence Loss term and the Soft ECE loss. Overall our analysis indicated that the goal of trying to improve deep learning model calibration for cardiac MR applications was achieved but only in terms of some calibration metrics. The results further highlighted the potential weakness of current measures and indicated the need to continue to investigate and identify robust metrics for high risk healthcare applications rather than simply using ECE and BACC ( Gupta et al., 2020 ), bearing in mind that the most relevant metrics may not be the same for different applications. Further research into uncertainty-aware training for optimising different (combinations of) metrics is also recommended.
Quantifying uncertainty of predictions has been identified as one way to develop more trustworthy artificial intelligence (AI) models beyond conventional reporting of performance metrics. When considering their role in a clinical decision support setting, AI classification models should ideally avoid confident wrong predictions and maximise the confidence of correct predictions. Models that do this are said to be well calibrated with regard to confidence. However, relatively little attention has been paid to how to improve calibration when training these models, i.e. to make the training strategy uncertainty-aware . In this work we: (i) evaluate three novel uncertainty-aware training strategies with regard to a range of accuracy and calibration performance measures, comparing against two state-of-the-art approaches, (ii) quantify the data (aleatoric) and model (epistemic) uncertainty of all models and (iii) evaluate the impact of using a model calibration measure for model selection in uncertainty-aware training, in contrast to the normal accuracy-based measures. We perform our analysis using two different clinical applications: cardiac resynchronisation therapy (CRT) response prediction and coronary artery disease (CAD) diagnosis from cardiac magnetic resonance (CMR) images. The best-performing model in terms of both classification accuracy and the most common calibration measure, expected calibration error (ECE) was the Confidence Weight method, a novel approach that weights the loss of samples to explicitly penalise confident incorrect predictions. The method reduced the ECE by 17% for CRT response prediction and by 22% for CAD diagnosis when compared to a baseline classifier in which no uncertainty-aware strategy was included. In both applications, as well as reducing the ECE there was a slight increase in accuracy from 69% to 70% and 70% to 72% for CRT response prediction and CAD diagnosis respectively. However, our analysis showed a lack of consistency in terms of optimal models when using different calibration measures. This indicates the need for careful consideration of performance metrics when training and selecting models for complex high risk applications in healthcare. Graphical abstract Highlights • Propose three novel uncertainty-aware training strategies. • Compare with state-of-the-art methods on two medical image classification tasks. • Report a wide range of accuracy and calibration performance measures. • Quantify performance of all models in terms of aleatoric and epistemic uncertainty. • Evaluate use of calibration-based model selection on accuracy and calibration. Keywords
Materials We performed two experiments utilising different materials for each. The first experiment focused on response prediction for CRT patients, and the second on diagnosis of CAD. See Table 1 for a summary of the data used in each experiment, which are further described below. Both experiments utilised CMR images as model inputs. In both, we train and evaluate the baseline model featuring a segmentation network followed by a VAE and classifier, and compare this with six different uncertainty-aware versions of the same model. CRT response prediction model We used two databases to train and evaluate our baseline and uncertainty-aware CRT response prediction models: (i) CMR SA stacks of 10,000 subjects (a mix of healthy and cardiovascular disease patients) from the UK Biobank (UKBB) dataset ( Petersen et al., 2015 ) and (ii) a database from the clinical imaging system of Guy’s and St Thomas’ NHS Foundation Trust (GSTFT) consisting of 20 heart failure (HF) patients and 73 CRT patients. The UKBB database was utilised to train the VAE, the HF patients for fine-tuning the segmentation model and the VAE and the CRT patients were used to train and evaluate the VAE and classifier. Further details are provided in Section 4.1 . Details of the UKBB database are provided in Section 3.2 . For the GSTFT database, all 73 CRT patients met the conventional criteria for CRT patient selection, chosen using current clinical guidelines based on New York Heart Association classification, left ventricular ejection fraction, QRS duration, the type of bundle branch block and etiology of cardiomyopathy and atrial rhythm ( Members et al., 2013 ). CMR imaging was performed prior to CRT and the CMR multi-slice SA stack was used in this study. The Siemens Aera 1.5T, Siemens Biograph mMR 3T, Philips 1.5T Ingenia and Philips 1.5T and 3T Achieva scanners were used to perform CMR imaging. The typical slice thickness was 8-10 mm, in-plane resolution was between and and the temporal resolution was 13–31 ms/frame. Using post-CRT echocardiography images (at 6 month follow up), a positive response was defined as a 15% reduction in left ventricular (LV) end-systolic volume. The HF patients had similar CMR imaging details to the CRT patients. For this experiment, for all datasets the top three slices of the SA stack were employed as the input to the models described in Section 2 . Ideally all slices should be utilised but for computational efficiency we only used three slices for prediction. We chose the basal to mid slices as these slices exhibit most myocardial deformation throughout contraction ( Jung et al., 2006 ). All slices were resampled in-plane to a voxel size of 1.25 × 1.25 mm, cropped to 80 × 80 pixels, and temporally resampled to time samples as per the same process utilised by Puyol-Antón et al. (2020) , before being used for training/evaluation of the models. CAD diagnosis model For the CAD diagnosis model all images were extracted from the UKBB. Images were obtained on a 1.5 T MRI scanner (MAGNETOM Aera, Siemens Healthcare, Erlangen, Germany). A typical CMR dataset consists of 10 SA image slices with a matrix size of 208 × 187 and a slice thickness of 8 mm, covering both ventricles from the base to the apex. The in-plane image resolution is 1.8 × 1.8 mm , the slice gap is 2 mm, with a repetition time of 2.6 ms and an echo time of 1.10 ms. Each cardiac cycle consists of frames, with further details on the image acquisition protocol described in Petersen et al. (2015) . 
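As an illustration of the preprocessing described for the CRT data above (in-plane resampling, centre-cropping to 80 × 80 and temporal resampling), the sketch below applies the same operations to a synthetic cine slice. The spacing, crop size and frame count follow the description above; everything else is a placeholder rather than the study's pipeline code.

```python
# Illustrative preprocessing of a single short-axis cine slice: in-plane
# resampling, centre crop to 80x80 and temporal resampling. Synthetic data.
import numpy as np
from scipy.ndimage import zoom

def preprocess(cine, in_plane_spacing, target_spacing=1.25, crop=80, n_frames=25):
    """cine: (T, H, W) array of one short-axis slice over the cardiac cycle."""
    t, h, w = cine.shape
    scale = in_plane_spacing / target_spacing
    resampled = zoom(cine, (1, scale, scale), order=1)         # in-plane resample
    _, H, W = resampled.shape
    top, left = (H - crop) // 2, (W - crop) // 2
    cropped = resampled[:, top:top + crop, left:left + crop]   # centre crop
    frame_idx = np.linspace(0, t - 1, n_frames).round().astype(int)
    return cropped[frame_idx]                                  # temporal resample

cine = np.random.rand(30, 120, 120).astype(np.float32)         # fake 30-frame slice
out = preprocess(cine, in_plane_spacing=1.8)
print(out.shape)   # (25, 80, 80)
```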
For the CAD diagnosis experiment, we utilised 16022 UKBB subjects (14384 healthy and 1638 CAD subjects). As coronary occlusions can occur throughout the coronary tree we chose the middle three slices of the SA stack to cover the base, mid and apical portions of the heart for all subjects, similar to Clough et al. (2019) . All slices were cropped to 80 × 80 pixels and did not require any re-sampling. To follow a similar approach to the CRT experiment, only 25 time frames were utilised for the training of the models. Ethics Institutional ethics approval was obtained for use of the clinical data and all patients consented to the research and for the use of their data. All relevant protocols were adhered to in order to retrieve and store the patient data and related images. Experiments Below we describe the details of our two experiments. Please refer to Table 1 for summaries of the data used in each. Experiment 1 - CRT response prediction In the first experiment the task was to predict the binary response to CRT (positive/negative) using the pre-treatment CMR data. In order to train the framework for this task the following steps were performed: • Fine-tune the pre-trained segmentation model : The segmentation model ( Chen et al., 2020 ) was pre-trained using UKBB CMR data so to make it robust to the clinical GSTFT data it was fine-tuned using CMR data from the 20 GSTFT HF patients. The fine-tuning was carried out using 300 manually segmented CMR SA slices (multiple slices/time points from the 20 CMR scans). • Segment the UKBB and GSTFT CRT CMR data : The fine-tuned segmentation model was used to automatically segment all frames of the 10,000 UKBB subjects as well as the 73 GSTFT CRT subjects. (Note that this cohort of 10,000 UKBB subjects was separate from the UKBB data used to initially train the segmentation model.) • Train the VAE : The VAE was pre-trained using the U-net segmented UKBB data and fine-tuned using the ground truth segmentations of the GSTFT HF data. • Train the VAE and classifier together : We then used the U-net segmented CRT data to train the VAE and CRT classifier for 300 epochs similar to Puyol-Antón et al. (2020) . For training each uncertainty-aware method, the fine-tuned VAE model was used, the uncertainty-aware loss function or weighting introduced and then both the VAE and CRT classifier trained for 300 epochs using the U-net segmentations of the 73 CRT patients. In this experiment, the framework was trained using a faster learning rate for the VAE and a slower rate for the CRT classifier ( ) to ( ), with a batch size of 8. For all approaches, the final model was selected as the one with the highest validation balanced accuracy (BACC) over the classifier training epochs. Both the CRT baseline and uncertainty-aware models were trained and evaluated using a 5-fold nested cross validation. For each of the 5 outer folds, an inner 2-fold cross validation was performed with grid search hyperparameter optimisation over a range of values. In these inner folds, the set of hyperparameters yielding the highest validation BACC was selected. The optimal hyperparameters were used to train a model (using all training data for that outer fold) and then applied to the held-out (outer) fold. This process was repeated for all outer folds. In this way, hyperparameter optimisation was performed using training data and the model was always applied to completely unseen data. Note also that the CRT data had not been used in pre-training either the segmentation model or the VAE. 
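The nested scheme described above can be summarised as in the sketch below, using a scikit-learn classifier as a stand-in for the VAE-plus-classifier framework; the estimator, hyperparameter grid and data are illustrative assumptions.

```python
# Sketch of nested cross-validation: an outer 5-fold split for evaluation and an
# inner 2-fold grid search selecting hyperparameters by validation balanced accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

inner = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

search = GridSearchCV(
    SVC(), param_grid={"C": [0.1, 1, 10]},
    scoring="balanced_accuracy", cv=inner,        # inner folds pick hyperparameters
)
outer_scores = cross_val_score(search, X, y, scoring="balanced_accuracy", cv=outer)
print(outer_scores.mean(), outer_scores.std())     # performance on unseen outer folds
```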
The hyperparameters optimised using grid search for the CRT response prediction model are presented (on the left) in Table 2 . The hidden layer size in the classifier was also optimised as a hyperparameter but all methods found an optimal size of 32. Experiment 2 - CAD diagnosis In this experiment the task was to diagnose (positive/negative) CAD from CMR images. A similar training procedure was followed as in Experiment 1, i.e. • Segment the UKBB CMR data : First, the U-net segmentation model ( Chen et al., 2020 ) (pre-trained on the separate UKBB cohort as in Experiment 1) was used to segment the 16,022 UKBB CMR stacks. Note that no fine-tuning was necessary for this experiment as it used only UKBB data. • Train the VAE : The VAE was pre-trained using the segmented UKBB CMR data for 60 epochs. • Train the VAE and classifier together : The classifier was introduced and trained for a further 35 epochs. For training the uncertainty-aware methods, the trained VAE was used and the classifier trained for an additional 35 epochs. In this experiment the framework was trained using the same learning rate for the VAE and classifier, with a batch size of 25. As for the CRT experiment, the highest validation BACC was used for model selection. For validation a single training/validation/test split of 11535/1282/3205 subjects was employed (i.e. 16022 subjects in total, comprising 14384 healthy and 1638 CAD subjects, as detailed in Section 3.2 ). The same hyperparameters from the CRT application were optimised for the CAD diagnosis model using grid search. The final hyperparameters are presented (on the right) in Table 2 . Similar to the CRT classifier, all methods had an optimal hidden layer of size 32. Additional hyperparameters for comparative approaches In addition to the hyperparameters in Table 2 , the AvUC loss function utilised hyperparameters stated in the paper by Krishnan and Tickoo (2020) . Specifically, a warm up strategy was employed, starting with the uncertainty threshold set to 1 and then updated every epoch after the first 3 epochs. The additional parameters utilised for the Soft ECE loss function were the same as those stated in Karandikar et al. (2021) . We fixed the number of bins at 15 to keep the search space manageable and varied the soft_binning_temperature value to obtain an optimal value of 0.1 for CRT response prediction and 0.01 for CAD diagnosis. This parameter is described in detail in the original paper ( Karandikar et al., 2021 ) and is used to scale, or soften, the bins. Implementation details All models were trained on an NVIDIA A6000 48 GB GPU using an Adam optimiser. All data for both experiments were augmented with random flipping and rotations. The code 1 and implementation details are available for download and use. Declaration of Competing Interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Tareen Dawood reports financial support was provided by NIHR Biomedical Research Centre at Guy's and Saint Thomas' NHS Foundation Trust and King's College. Esther Puyol reports financial support was provided by Wellcome Trust.
Data statement The UKBB datasets presented in this study are publicly available and can be found in online repositories under approved research projects from https://www.ukbiobank.ac.uk/ . The GSTFT dataset cannot be made publicly available due to restricted access under hospital ethics and because informed consent from participants did not cover public deposition of data. Acknowledgements This work was supported by the Kings DRIVE Health CDT for Data-Driven Health and further funded/supported by the National Institute for Health Research (NIHR) Biomedical Research Centre at Guy's and St Thomas' NHS Foundation Trust and King's College London, United Kingdom. Additionally this research was funded in whole, or in part, by the Wellcome Trust, United Kingdom (WT203148/Z/16/Z). For the purpose of open access, the author has applied a CC BY public copyright licence to any author accepted manuscript version arising from this submission. The work was also supported by the EPSRC, United Kingdom through the SmartHeart Programme Grant (EP/P001009/1). This research has been conducted using the UK Biobank Resource under Application Number 17806. The views expressed in this paper are those of the authors and not necessarily those of the NHS, EPSRC, the NIHR or the Department of Health and Social Care.
CC BY
no
2024-01-16 23:43:49
Med Image Anal. 2023 Aug; 88:102861
oa_package/01/64/PMC10788705.tar.gz
PMC10788706
38221905
INTRODUCTION Portal vein thrombosis (PVT) is characterized by the intraluminal occurrence of thrombosis within the portal vein, encompassing its left and right hepatic branches and potentially extending into the splenic vein and superior mesenteric vein. This leads to consequential complete or partial obstruction of the portal vein’s blood flow, representing a clinically significant vascular complication that is particularly prominent among individuals afflicted with liver cirrhosis [ 1 ]. In this context, non-malignant PVT manifests in approximately 25% of cirrhotic patients, presenting significant challenges to prognosis and clinical management [ 2 ]. The clinical consequences of PVT are extensive, encompassing mortality, hemorrhage, ascites, acute kidney injury and post-liver transplantation, emphasizing the critical importance of early detection and intervention, particularly in the setting of liver cirrhosis [ 3 , 4 ]. Research has indicated that risk factors such as the widening of the portal vein diameter, deceleration of portal vein velocity, poor liver function and other variables contribute to the development of liver cirrhosis and PVT [ 5 , 6 ]. Despite this, the precise cause and mechanism of PVT remain elusive. Consequently, regular monitoring and early detection of PVT are crucial. Timely implementation of active anticoagulation therapy has demonstrated a significant improvement in prognosis and outcomes for affected individuals. Currently, we still lack an ideal diagnostic method for PVT that is repeatable, easily accessible and secure and imposes minimal financial or psychological burdens on patients. To date, imaging continues to be a central component in diagnostic methodologies for PVT. Initial screening and diagnosis are often facilitated through ultrasound, prized for its non-invasiveness and relatively high accuracy [ 7 ]. However, the interpretative reliability of ultrasound findings can be influenced by a constellation of factors, including the patient’s physiological status, potential vascular anomalies and the expertise of the ultrasonographer. To achieve definitive diagnosis, computed tomography (CT) and magnetic resonance imaging (MRI) are deployed, each with its unique attributes and limitations [ 8 , 9 ]. CT, while effective, is encumbered by concerns surrounding radiation exposure and potential nephrotoxicity linked to contrast agent administration [ 8 ]. Conversely, MRI, owing to its radiation-free profile and heightened sensitivity, presents an appealing alternative, albeit tempered by considerations of cost and quality variability [ 9 ]. Complementary to imaging, hematology and coagulation blood tests stand as fundamental diagnostic pillars, providing indispensable insights into PVT risk assessment and diagnosis [ 10 ]. These tests encompass a gamut of parameters, such as hemoglobin, platelet count, prothrombin time, activated partial thromboplastin time and D-dimer levels, as well as liver and renal function markers [ 11 , 12 ]. While the diagnostic methods discussed are indeed pivotal, the intricate interplay of diverse clinical and laboratory factors underscores the necessity for a comprehensive data-driven precision medicine model. Presently, the majority of prediction models in PVT among chronic cirrhosis patients are built on single-center data. This approach results in diverse clinical outcomes due to variations in patient genetics and living environments throughout the country. 
Machine learning, a multidisciplinary field, can enhance clinical prediction models by leveraging its ability to simulate human learning behavior, continuously refining knowledge structures for improved performance. This model is crucial in achieving a higher degree of precision and individualization when predicting the risk of PVT in patients grappling with chronic cirrhosis. Early diagnosis of cirrhotic PVT holds paramount importance. Yet, it is notable that the landscape lacks multicenter-based studies equipped to accurately predict the risk of developing cirrhotic PVT. In this context, the development of such a model becomes not only advantageous but also indispensable. Within the dynamic landscape of modern medical science, data-driven precision medicine models have been applied in disease diagnosis and outcome predictions, such as individualized pair analysis (iPAGE) [ 13 , 14 , 15 , 16 , 17 ], least absolute shrinkage and selection operator (LASSO) [ 18 , 19 , 20 ] and deep neural networks [ 21 , 22 ]. As precision medicine is catalyzing profound transformations in healthcare paradigms, our study takes on a mission of paramount significance. We embark on the journey to bridge a critical diagnostic gap by harnessing advanced modeling techniques, including Support Vector Machine (SVM), Naïve Bayes, Quadratic Discriminant Analysis (QDA) and SHapley Additive exPlanations (SHAP). These models are meticulously integrated into a sophisticated data-driven precision medicine framework poised to revolutionize our approach to diagnosing and managing PVT in the context of chronic cirrhosis. We further trained the model with multicenter data to achieve higher accuracy. Our overarching objective is to reshape diagnostic precision, enabling more effective clinical decision-making. Through this pioneering effort, we aim to fulfill an urgent need in the management of this multifaceted medical condition. The diagnostic prediction of PVT in chronic cirrhosis patients using a data-driven precision medicine model stands as a transformative endeavor with the potential to significantly enhance patient care and optimize clinical outcomes.
MATERIALS AND METHODS Patient selection from two clinical cohorts Our patient cohort was drawn from two clinical centers: the First Hospital of Lanzhou University ( n = 468) and Jilin Hepato-Biliary Diseases Hospital ( n = 348) ( Figure 1 ). Patients diagnosed with decompensated chronic hepatitis cirrhosis between January 2016 and December 2021 were re-screened according to the cirrhosis treatment guidelines published in 2019 [ 23 ]. The diagnosis of cirrhosis was based on clinical, laboratory and ultrasound evidence, and the diagnosis of PVT was based on abdominal ultrasound and liver CT. Exclusion criteria were hepatocellular carcinoma, extrahepatic malignancy, prior transjugular intrahepatic portosystemic shunt (TIPS) treatment, partial splenic embolization, other abdominal surgery, use of drugs known to interfere with clotting and known hemostatic disorders other than cirrhosis. None of the patients from the First Hospital of Lanzhou University had undergone splenectomy, whereas all of the patients from Jilin Hepato-Biliary Diseases Hospital had. The study was reviewed by the Ethics Committee of the First Hospital of Lanzhou University. Data collection To build a comprehensive dataset for analysis and model development, we collected data retrospectively, covering demographic information (age and gender) and a wide array of clinical parameters, including the presence of portal emboli and an extensive panel of laboratory measurements. Hematological parameters, namely white blood cell count (WBC, 10^9/L), hemoglobin (HB, g/L) and platelet count (PLT, 10^9/L), were measured on a Mindray BC-6800 Plus fully automatic hematology analyzer. Albumin (ALB, g/L) and total bilirubin (TB, μmol/L) were measured on a Beckman Coulter AU5800 chemistry analyzer. Coagulation indicators, namely prothrombin time (PT, s), prothrombin activity (PTA, %), international normalized ratio (INR), fibrinogen (FIB, g/L) and activated partial thromboplastin time (APTT, s), were measured on an Instrumentation Laboratory ACL TOP 750 LAZ analyzer. Radiological metrics were recorded with a Philips EPIQ-5 Doppler ultrasound system: ascites, portal vein diameter (PVD, mm), splenic vein diameter (SVD, mm) and portal velocity (PV, cm/s). The Child–Pugh score (CPS) was calculated from ALB, TB, PT, ascites and hepatic encephalopathy as assessed and recorded by clinicians. Univariate analysis Continuous variables were presented as means and SDs, or as medians and interquartile ranges for skewed distributions. Categorical data were expressed as numbers and percentages. Comparisons between the two groups used the t -test, Wilcoxon test or chi-square test, as appropriate. All statistical analyses were performed in SAS Studio, and P -values less than 0.05 were considered statistically significant. The prediction models The stacking model used in this study We used the general features, blood indices, B-scan ultrasonography measurements and cirrhosis grade of the Lanzhou cohort ( N = 468) as the training cohort. We then trained two machine learning models, SVM and a Naïve Bayes classifier, on this cohort and analyzed the features used by these two models with SHAP, which returns the importance of each feature.
To further improve the performance, we selected the features common to the 10 most important features of each of these two models. These common features were fed into the QDA for training. Finally, we evaluated the performance of the QDA and compared it with other machine learning methods on the Jilin cohort ( N = 348). SVM SVM is a supervised machine learning algorithm that finds a hyperplane separating two classes with the largest margin [ 24 ]. Suppose the features of a training sample $i$ are $\mathbf{x}_i = (x_{i1}, \dots, x_{in})$, such as PT and CPS in our case. The label of the sample is $y_i \in \{+1, -1\}$, where $y_i = +1$ represents a sample with PVT and $y_i = -1$ denotes a non-PVT sample. Suppose the hyperplane that separates the two classes is $\mathbf{w}^{T}\mathbf{x} + b = 0$, where $\mathbf{w}$ is a set of weights for the features and $b$ is a constant. We look for a hyperplane such that $\mathbf{w}^{T}\mathbf{x}_i + b \ge +1$ when $y_i = +1$ and $\mathbf{w}^{T}\mathbf{x}_i + b \le -1$ when $y_i = -1$. Suppose $H_1: \mathbf{w}^{T}\mathbf{x} + b = +1$ and $H_2: \mathbf{w}^{T}\mathbf{x} + b = -1$ are these two marginal hyperplanes. The distance between $H_1$ and $H_2$ is $2/\lVert\mathbf{w}\rVert$. The main idea of SVM is to maximize the margin between the two classes, that is, to maximize the distance between $H_1$ and $H_2$, which is equivalent to minimizing $\tfrac{1}{2}\lVert\mathbf{w}\rVert^{2}$. Thus the problem can be expressed as $\min_{\mathbf{w},b}\ \tfrac{1}{2}\lVert\mathbf{w}\rVert^{2}$ subject to $y_i(\mathbf{w}^{T}\mathbf{x}_i + b) \ge 1$ for all $i$, which is a constrained optimization problem. By introducing Lagrange multipliers, this problem can be solved to separate PVT from non-PVT subjects. Naïve Bayes The Naïve Bayes method is a supervised machine learning algorithm used for classification tasks, such as text classification [ 25 ]. Suppose the features of a subject are $x_1, \dots, x_n$ and $y$ is the label of the subject. Bayes' theorem states that $P(y \mid x_1, \dots, x_n) = \dfrac{P(y)\,P(x_1, \dots, x_n \mid y)}{P(x_1, \dots, x_n)}$. Using the naive conditional independence assumption that $P(x_j \mid y, x_1, \dots, x_{j-1}, x_{j+1}, \dots, x_n) = P(x_j \mid y)$, the equation simplifies to $P(y \mid x_1, \dots, x_n) = \dfrac{P(y)\prod_{j=1}^{n} P(x_j \mid y)}{P(x_1, \dots, x_n)}$. Since $P(x_1, \dots, x_n)$ is constant given the input, the denominator can be dropped: $P(y \mid x_1, \dots, x_n) \propto P(y)\prod_{j=1}^{n} P(x_j \mid y)$. This independent-feature model is the Naïve Bayes model, which is used with a decision rule in the Naïve Bayes classifier. A typical choice is to pick the most probable hypothesis so as to minimize the probability of misclassification; this is the maximum a posteriori (MAP) decision rule. The corresponding Bayes classifier assigns a class label as $\hat{y} = \arg\max_{y} P(y)\prod_{j=1}^{n} P(x_j \mid y)$. SHAP SHAP is based on Shapley values, a well-known concept from cooperative game theory [ 26 ]. The Shapley value was originally used to fairly determine a player's contribution to the final outcome of a game. Suppose we have a cooperative game in which a set of players collaborate to create some value; if we can measure the total payoff of the game, the Shapley value reflects the marginal contribution of each participant. If we regard our machine learning model as a game in which individual features 'cooperate' to generate an output (the model prediction), then we can attribute the prediction to each of the input features. For example, assume teamwork is needed to finish a project. The team $Q$ has $x$ members, and the total value achieved through this teamwork is $v = v(Q)$. The Shapley value $\phi_m(v)$, the fair share or payout given to team member $m$, is defined as $\phi_m(v) = \sum_{S \subseteq Q\setminus\{m\}} \dfrac{k(S)!\,\bigl(x - k(S) - 1\bigr)!}{x!}\,\bigl[v(S \cup \{m\}) - v(S)\bigr]$. For a given member $m$, the summation runs over all subsets $S$ of the team $Q = \{1, 2, 3, \dots, x\}$ that can be constructed after excluding $m$; here $k(S)$ is the size of $S$, $v(S)$ is the value achieved by sub-team $S$ and $v(S \cup \{m\})$ is the value realized after $m$ joins $S$. Shapley values are applied in many areas; in machine learning they account for feature contributions, where the features play the role of players or team members and the model prediction is the payoff of the game.
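As a concrete illustration of this feature-engineering step, the sketch below trains a linear SVM and a Gaussian Naïve Bayes classifier, ranks features by mean absolute SHAP value and keeps the features shared by both top-10 lists. It is a minimal, hypothetical sketch, not the authors' code: the file names, the binary "PVT" column and the subsampling choices are assumptions made only for illustration.

```python
# Hypothetical sketch of the SVM + Naive Bayes + SHAP feature-engineering stage.
import numpy as np
import pandas as pd
import shap
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

train = pd.read_csv("lanzhou_cohort.csv")            # placeholder file name
X_train, y_train = train.drop(columns="PVT"), train["PVT"]

svm = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
nb = make_pipeline(StandardScaler(), GaussianNB())
svm.fit(X_train, y_train)
nb.fit(X_train, y_train)

def top10(model, X):
    """Rank features by mean |SHAP value| with the model-agnostic explainer."""
    background = shap.sample(X, 100)                  # small background set for speed
    explainer = shap.Explainer(lambda d: model.predict_proba(d)[:, 1], background)
    sv = explainer(X.iloc[:200])                      # subsample of patients to explain
    importance = np.abs(sv.values).mean(axis=0)
    return set(X.columns[np.argsort(importance)[::-1][:10]])

shared = top10(svm, X_train) & top10(nb, X_train)     # features fed to the QDA stage
print("shared top features:", sorted(shared))
```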
QDA QDA is a powerful classification method based on modeling the distribution of the data [ 27 ]. Unlike linear discriminant analysis, QDA assumes a different covariance matrix for each class, which allows the decision boundary to be quadratic and more flexible. Assume the training data are $(\mathbf{x}_i, y_i)$, where $\mathbf{x}_i$ contains seven features, namely PV, PT, PTA, PVD, APTT, age and CPS, and $y_i = 1$ indicates a PVT sample while $y_i = 0$ indicates a non-PVT sample. The quadratic discriminant function for class $k \in \{0, 1\}$ is $\delta_k(\mathbf{x}) = -\tfrac{1}{2}\log\lvert\boldsymbol{\Sigma}_k\rvert - \tfrac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_k)^{T}\boldsymbol{\Sigma}_k^{-1}(\mathbf{x} - \boldsymbol{\mu}_k) + \log \pi_k$, where $\boldsymbol{\mu}_1$ and $\boldsymbol{\mu}_0$ are the means of the PVT and non-PVT classes, $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_0$ are the covariance matrices of the PVT and non-PVT classes, and $\pi_1$ and $\pi_0$ are the prior probabilities of the two classes. The classification rule is $\hat{y}(\mathbf{x}) = \arg\max_k \delta_k(\mathbf{x})$: the predicted class is the class $k$ that maximizes the quadratic discriminant function $\delta_k(\mathbf{x})$. If $\delta_1(\mathbf{x}) > \delta_0(\mathbf{x})$, the subject is predicted to be PVT; otherwise the subject is non-PVT. Regularization is a common technique to improve the estimates by shrinking the individual class covariance matrices toward the pooled estimate. The regularization hyperparameter of QDA was tuned on the training cohort using 5-fold cross-validation; after the optimal regularization parameter was obtained, the QDA was trained on the entire training cohort. Evaluation and comparison In this study we use the AUROC (area under the receiver operating characteristic curve) to evaluate the performance of our stacking model [ 28 ]. AUROC is a valuable metric for assessing the overall performance of a binary classification model, especially with imbalanced datasets or when the costs of false positives and false negatives differ. The ROC (receiver operating characteristic) curve is a graphical representation of a binary classifier's performance across threshold values: it plots the true-positive rate (TPR; sensitivity) against the false-positive rate (FPR; 1 - specificity) as the threshold for classifying positive instances varies. The TPR and FPR are calculated as $\mathrm{TPR} = \dfrac{TP}{TP + FN}$ and $\mathrm{FPR} = \dfrac{FP}{FP + TN}$, where TP is the number of true-positively classified samples, FN the number of false-negatively classified samples, FP the number of false-positively classified samples and TN the number of true-negatively classified samples. We then compute the area under the ROC curve, which quantifies the model's ability to discriminate between the positive and negative classes across all possible thresholds. The AUROC ranges from 0 to 1, with higher values indicating better discrimination. Based on the AUROC, we compared our model with traditional machine learning methods including nearest neighbors, SVM, Gaussian process, decision tree, random forest, neural network, adaptive boosting, Naïve Bayes and LASSO.
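The tuning and evaluation procedure described above can be sketched as follows, assuming the same placeholder data layout as before. This uses scikit-learn's regularized QDA and GridSearchCV as a stand-in for the authors' actual scripts, which are not available here.

```python
# Minimal sketch: tune the QDA regularization parameter by 5-fold CV on the
# training cohort, refit on the full cohort, and report AUROC on the external
# validation cohort.  File names and the feature list follow the paper's text.
import pandas as pd
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV

FEATURES = ["PV", "PT", "PTA", "PVD", "APTT", "age", "CPS"]   # seven shared features

train = pd.read_csv("lanzhou_cohort.csv")      # placeholder paths
test = pd.read_csv("jilin_cohort.csv")

search = GridSearchCV(
    QuadraticDiscriminantAnalysis(),
    param_grid={"reg_param": [i / 10 for i in range(10)]},     # 0.0 ... 0.9
    scoring="roc_auc",
    cv=5,
)
search.fit(train[FEATURES], train["PVT"])
best_qda = search.best_estimator_              # refit on the whole training cohort

scores = best_qda.predict_proba(test[FEATURES])[:, 1]
print("best reg_param :", search.best_params_["reg_param"])
print("validation AUROC:", round(roc_auc_score(test["PVT"], scores), 3))
```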
RESULTS Differential analysis of clinical features in PVT patients We performed statistical comparisons of 13 continuous clinical parameters using either t -tests (for normally distributed continuous variables) or Wilcoxon tests (for skewed continuous variables), as indicated in Table 1 . Among these parameters, two continuous variables, HB and PTA, exhibited a significant decrease in the PVT group in both cohorts ( P -value <0.05). Conversely, three clinical variables, INR, CPS and PVD, displayed a significant increase in the PVT group. These notable differences in clinical parameters highlight the potential for constructing predictive models based on these distinctive clinical features. Feature engineering in the Lanzhou cohort To accurately predict PVT in chronic cirrhosis patients, we built a stacking model combining three machine learning algorithms, SVM, Naïve Bayes and QDA ( Figure 2A ). We used the Lanzhou cohort ( N = 468), which contains the general features, blood features, B-scan ultrasonography features and cirrhosis grade, as the input. Principal component analysis shows that all of these features together cannot separate PVT from non-PVT subjects well ( Figure 2B ), nor can the blood features or the B-scan ultrasonography features alone ( Figure 2C and D ). We first classified PVT versus non-PVT using 11 machine learning algorithms trained on the Lanzhou cohort and tested on the Jilin cohort and found that SVM and the Naïve Bayes classifier outperformed the other algorithms ( Figure 2E ). We then applied SHAP to interpret the features used by the linear SVM ( Figure 3A ) and Naïve Bayes ( Figure 3B ). PV, PT, PTA, PVD, APTT, CPS, HB, SVD, WBC and age were the top 10 most important features in SVM. PV, PVD, INR, APTT, PT, PTA, CPS, PLT and age were the top 10 most important features calculated by Naïve Bayes. Among the top 10 features of SVM and the top 10 features of Naïve Bayes, seven features were shared, i.e. PV, PT, PVD, PTA, APTT, age and CPS ( Figure 3C ). We analyzed the principal components of these seven common features ( Figure 3D ) and found that they still cannot distinguish PVT from non-PVT. We further examined the differences in the seven common features between PVT and non-PVT patients ( Figure 3E–K ). PTA, PV and APTT were slightly lower in PVT patients than in non-PVT patients, whereas patients with higher PT, PVD, CPS and age were more likely to have PVT. Construction of the PVT prediction model and validation in the Jilin cohort The seven shared features alone could not separate PVT from non-PVT well, owing to their non-linear relationship with PVT. To address this problem, we applied QDA to classify PVT based on the seven common features. The QDA was trained on the Lanzhou cohort and validated on the Jilin cohort. The scores obtained by QDA showed a significant difference between PVT and non-PVT ( Figure 3L ). The QDA regularization parameter was tuned using 5-fold cross-validation in the training cohort and was set to 0.7 ( Figure 4A ). The final QDA model was trained on the entire training cohort and performed well in the validation cohort (AUROC = 0.865, Figure 4B ). We compared the QDA based on the seven features (our stacking model) with other machine learning methods based on all the collected features. It outperformed nearest neighbors, SVM, Gaussian process, decision tree, random forest, neural network, adaptive boosting, Naïve Bayes and LASSO ( Figure 4C ).
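The PCA check mentioned above can be reproduced roughly as follows; column names and file paths are again placeholders, and the plot is only meant to visualize whether the two classes overlap in the first two principal components.

```python
# Rough reproduction of the PCA sanity check: project standardized features
# onto the first two principal components and colour points by PVT status.
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

train = pd.read_csv("lanzhou_cohort.csv")                # placeholder path
X = StandardScaler().fit_transform(train.drop(columns="PVT"))
pcs = PCA(n_components=2).fit_transform(X)

plt.scatter(pcs[:, 0], pcs[:, 1], c=train["PVT"], cmap="coolwarm", s=10)
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.title("Lanzhou cohort: PVT vs non-PVT in PCA space")
plt.show()
```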
We also examined the operating statistics of our stacking model, which achieved an accuracy of 73.2% and a recall of 82.0% ( Figure 4D ). To validate the effectiveness of QDA, we compared it with other machine learning methods based on the seven features selected from Naïve Bayes and SVM ( Table 2 ); QDA was superior, with an AUROC of 0.865 and a sensitivity of 0.820. In addition, ablation experiments validated the effectiveness of both the Naïve Bayes/SVM feature engineering and the QDA classifier ( Table 3 ). We also conducted a thorough analysis of the age factor in our model: we removed the only general feature, age, while retaining the blood and B-scan ultrasonography features among the seven common features (refer to Supplemental Table S1 available online at http://bib.oxfordjournal.org/ ). In the Jilin cohort, the AUROC of QDA improved to 0.870 when age was eliminated, compared with the model using all seven features, while precision, recall and accuracy were not reduced.
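An illustrative version of this age-removal experiment is sketched below. It reuses the placeholder data layout from the earlier sketches and fixes the regularization parameter at the reported value of 0.7; it is not the authors' analysis script.

```python
# Hypothetical ablation: retrain the tuned QDA with and without the age feature
# and compare external-validation AUROC, mirroring the experiment described above.
import pandas as pd
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

train = pd.read_csv("lanzhou_cohort.csv")    # placeholder paths
test = pd.read_csv("jilin_cohort.csv")

def validation_auroc(features, reg_param=0.7):
    qda = QuadraticDiscriminantAnalysis(reg_param=reg_param)
    qda.fit(train[features], train["PVT"])
    return roc_auc_score(test["PVT"], qda.predict_proba(test[features])[:, 1])

full = ["PV", "PT", "PTA", "PVD", "APTT", "age", "CPS"]
print("AUROC with age   :", round(validation_auroc(full), 3))
print("AUROC without age:", round(validation_auroc([f for f in full if f != "age"]), 3))
```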
DISCUSSION PVT exerts a substantial impact on the quality of life and overall survival of affected individuals, and extensive research has shown that interventions initiated at earlier stages of the disease yield significantly higher rates of successful recanalization [ 29 , 30 , 31 ]. Through a comprehensive analysis of clinical and laboratory data, we have developed an advanced data-driven predictive model tailored specifically to the diagnosis of PVT in patients with chronic cirrhosis. Using data from two medical centers in China, we identified a core set of clinical indicators that prove pivotal for accurate PVT prediction: PV, PT, PTA, PVD, APTT, patient age and CPS. Our predictive model employs a stacking approach that combines SVM, Naïve Bayes and QDA into a single framework. It is worth emphasizing that the model relies exclusively on objective indicators, eliminating potential influence from subjective factors; these indicators comprise portal and coagulation indices, markers of liver function and age, all readily accessible in a clinical setting without additional cost or effort. Reduced PV has emerged as a pivotal risk factor for PVT in cirrhosis patients: it slows the clearance of clotting substances, heightens platelet–wall interactions and promotes hypoxic injury to vascular endothelial cells. Multiple studies consistently report that PVT incidence rises significantly when PV falls below the threshold of 15 cm/s [ 5 , 6 ]. Furthermore, cirrhotic patients often exhibit enlarged portal and splenic veins, compounding the likelihood of PVT development [ 32 , 33 ]. Xu et al. introduced a predictive model for PVT in hepatitis B cirrhosis patients after splenectomy and identified PVD, SVD and postoperative PLT changes as independent risk factors for PVT formation [ 34 ]. Previous investigations have also suggested that splenic thickening, a markedly reduced mean PV and the presence of diabetes may contribute to PVT risk [ 35 ]. The relationship between PT, PTA and PVT in cirrhosis patients is intricate. Traditionally, cirrhosis has been associated with a hypocoagulable state; however, recent evidence challenges this notion, revealing a delicate equilibrium between pro-coagulant and anticoagulant factors. Shifts in this balance can lead to either hypercoagulation or hypocoagulation, and some PVT cases may even resolve spontaneously. Nonetheless, studies of the correlation between PVT and coagulation function in cirrhosis patients have yielded inconsistent results [ 36 ]. In our study, INR, which reflects PT, showed an association with PVT but was not integrated into the predictive model. Our findings also emphasize the significance of the CPS in diagnosing PVT in cirrhosis: diminished liver function is closely tied to an increased risk of PVT. The liver synthesizes coagulation and fibrinolytic factors as well as anticoagulant substances such as protein S and protein C, and liver damage disrupts these processes and perturbs the clotting balance, so patients with cirrhosis may exhibit either hypocoagulation or hypercoagulation owing to these dynamic imbalances [ 37–40 ].
Some studies have proposed predictive models for PVT resolution based on factors such as liver disease severity, thrombus characteristics and treatment timing, although the relationship between PVT and liver function scores remains inconclusive [ 27 , 41 ]. The literature also lacks consensus regarding the impact of age on portal thrombosis: cirrhosis durations differ widely between etiologies, and the influence of treatment on the rate of cirrhosis progression remains uncertain. Nevertheless, age has emerged as a crucial factor in PVT development, with elderly patients showing heightened vulnerability. Aging introduces several mechanisms that increase PVT risk. Oxidative stress and systemic inflammation, often associated with aging, facilitate the formation of atherosclerotic plaques, a well-established risk factor for venous thrombosis. Reduced physical activity in elderly individuals can lead to venous stasis, further elevating susceptibility to thrombotic events, and age-related declines in fibrinolytic activity together with elevated levels of type I plasminogen activator inhibitor and increased platelet reactivity also augment the risk of venous thrombosis [ 42–44 ]. It is noteworthy that we explored the age factor in our model in detail. Although age was initially included, excluding it, given the limited connection between age and portal thrombosis reported in the literature and observed clinically, did not reduce model efficiency and in fact slightly enhanced model performance. During the development of our predictive model for PVT in cirrhosis patients, we also explored several associated indicators that ultimately did not contribute significantly to the model's efficiency, including HB, SVD, WBC, INR, gender, D-dimer and PLT. Lower HB levels in cirrhosis patients have been linked to PVT, likely owing to spleen-related sequestration and increased HB breakdown; however, relying solely on HB levels for diagnosis proved suboptimal, consistent with our findings. Enlargement of the SVD may itself result from PVT, as obstruction of splenic vein outflow by the thrombus dilates the vessel [ 45 ], and prior research established an SVD exceeding 8 mm as the optimal diagnostic threshold for PVT [ 46–48 ]. Furthermore, elevated WBC counts often accompany infection, reflecting the intricate connection between inflammation and coagulation [ 49 ]. In cirrhosis-related PVT, systemic inflammation plays a significant role: the liver receives blood directly from the intestines through the portal vein, linking the gut microbiome with inflammation, and even small amounts of endotoxin from gut microorganisms can trigger persistent thrombosis in cirrhosis and exacerbate clot formation [ 50 ]. Conversely, PVT development can worsen intestinal and hepatic ischemic damage and increase intestinal barrier permeability [ 51 ]. Although our study showed a correlation between elevated WBC and PVT, its diagnostic usefulness was limited because hypersplenism simultaneously reduces WBC counts [ 52 ]. D-dimer, which reflects fibrinolytic function, rises during thrombosis and may also rise with hypercoagulation, infection and inflammation, but it is not consistently elevated in patients with stable thrombosis without hyperfibrinolysis; its value may therefore be greater for predicting PVT progression and prognosis than for diagnosis [ 53 ].
Finally, advanced liver cirrhosis, often accompanied by hypersplenism, frequently leads to thrombocytopenia. Nevertheless, in vivo markers of platelet activation indicate that cirrhosis patients possess highly active platelets, promoting increased activation, aggregation, adhesion and release of factors that elevate PVT risk [ 35 , 54 ]. Consequently, platelet count was not included in our model, in line with clinical observations. Notably, we employed a multifaceted modeling approach that includes stacking, also known as stacked generalization or ensemble stacking. This machine learning technique combines the predictions of several base models (learners) to build a stronger and more robust model; by combining diverse models, stacking mitigates the limitations of any single model and thereby yields more precise and resilient predictions. Our stacking model harnessed the strengths of both discriminant modeling and statistical modeling by incorporating SVM and a Naïve Bayes classifier, which allowed us to prioritize essential features effectively, and by integrating QDA we obtained a quadratic rather than a linear decision boundary. The use of these stacked models markedly improved our ability to identify PVT accurately. Limitations of this study should be acknowledged. First, the set of clinical indicators used in the model, while informative, was not exhaustive; additional relevant variables may exist that were not considered in this analysis, potentially affecting the accuracy and comprehensiveness of the predictive model. Second, differences in baseline patient data between the two centers may introduce variability into the dataset and influence model performance. The absence of follow-up data is another limitation, as it prevents assessment of the model's predictive capability over time. Finally, selection bias cannot be excluded: the patient cohorts from the two centers may not be fully representative of all cirrhosis patients, which may limit the model's applicability to a broader population. In conclusion, our study provides a robust framework for predicting PVT in patients with chronic cirrhosis. Using data-driven precision medicine techniques and a model combining the SVM, Naïve Bayes and QDA algorithms, we focused on essential clinical indicators, including PV, PT, PTA, PVD, APTT, age and CPS. This tool aids clinicians in making informed decisions regarding chronic cirrhosis and PVT, although further research and validation are needed to enhance its clinical applicability.
Ying Li, Jing Gao and Xubin Zheng contributed equally to this work. Abstract Background Portal vein thrombosis (PVT) is a significant complication in cirrhotic patients, necessitating early detection. This study aims to develop a data-driven predictive model for PVT diagnosis in patients with chronic hepatitis liver cirrhosis. Methods We employed data from a total of 816 patients with chronic cirrhosis, divided into the Lanzhou cohort ( n = 468) for training and the Jilin cohort ( n = 348) for validation. This dataset encompassed a wide range of variables, including general characteristics, blood parameters, ultrasonography findings and cirrhosis grading. To build our predictive model, we employed a stacking approach that combines Support Vector Machine (SVM), Naïve Bayes and Quadratic Discriminant Analysis (QDA). Results In the Lanzhou cohort, the SVM and Naïve Bayes classifiers effectively distinguished PVT from non-PVT cases, and seven of their top-ranked features were shared: Portal Velocity (PV), Prothrombin Time (PT), Portal Vein Diameter (PVD), Prothrombin Time Activity (PTA), Activated Partial Thromboplastin Time (APTT), age and Child–Pugh score (CPS). The QDA model, trained on these seven shared features in the Lanzhou cohort and validated on the Jilin cohort, demonstrated significant differentiation between PVT and non-PVT cases (AUROC = 0.73 and 0.86, respectively). Comparative analysis showed that our QDA model outperformed several other machine learning methods. Conclusion Our study presents a comprehensive data-driven model for PVT diagnosis in cirrhotic patients that enhances clinical decision-making. The SVM–Naïve Bayes–QDA model offers a precise approach to managing PVT in this population.
Supplementary Material
FUNDING This research was supported by the National Natural Science Foundation of China (32370711), the National Natural Science Foundation of China (32300554), the Shenzhen Science and Technology Program (JCYJ20220530152409020), the Shenzhen Medical Research Fund (A2303033) and the Clinical Research Center for General Surgery of Gansu Province (20JR10FA661). DATA AVAILABILITY Data are available on reasonable request. Author Biographies Ying Li , MD, is a deputy chief physician at the First Hospital of Lanzhou University. Her research interests are in hepatology and regenerative medicine. [email protected]. Jing Gao , PhD, MD, is a researcher at the Karolinska Institutet (Sweden), the University of Helsinki and Helsinki University Hospital (Finland). Her research interests include respiratory medicine, bioinformatics and global health. [email protected]. Xubin Zheng , PhD, is an assistant professor at the School of Computing and Information Technology, Great Bay University, and a researcher at the Great Bay Institute for Advanced Study. His research interests are in precision medicine, disease diagnosis and transcriptional regulation in cancer. [email protected]. Guole Nie is a PhD student at the First School of Clinical Medicine, Lanzhou University. His research interests are in hepatology and regenerative medicine. [email protected]. Jican Qin is a research assistant at the School of Computing and Information Technology, Great Bay University. His research interests are in precision medicine and disease diagnosis. [email protected]. Haiping Wang , MPH, is a researcher at the First Hospital of Lanzhou University; their research focuses on biostatistics. [email protected]. Tao He , MD, is a clinician at Jilin Hepato-Biliary Diseases Hospital, Changchun, China, specializing in the treatment of chronic hepatitis, liver cirrhosis and liver cancer. [email protected]. Åsa M. Wheelock , PhD, is an associate professor and head of the Respiratory Medicine Unit, Department of Medicine and Centre for Molecular Medicine, Karolinska Institutet, Stockholm, Sweden. Her research involves molecular sub-phenotyping of heterogeneous diagnoses of obstructive lung disease, such as COPD, asthma and post-acute sequelae of COVID-19 (PASC), using multi-omics integration and systems medicine approaches. [email protected]. Chuan-Xing Li , PhD, is a researcher at the Karolinska Institutet (Sweden). Her research interest lies in computational precision medicine and multi-omics integration. [email protected]. Lixin Cheng , PhD, is a principal investigator of bioinformatics at Shenzhen People's Hospital, Shenzhen, China. His research interests include modeling and algorithms for big data analysis in medicine and biology. [email protected]. Xun Li , PhD, MD, is a professor at the Department of General Surgery and the Key Laboratory of Biotherapy and Regenerative Medicine of Gansu Province, First Hospital of Lanzhou University (China). His research interests include hepatobiliary surgery, surgical endoscopy and liver transplantation. [email protected].
CC BY
no
2024-01-16 23:43:49
Brief Bioinform. 2024 Jan 13; 25(1):bbad478
oa_package/57/7d/PMC10788706.tar.gz
PMC10788707
0
Expression of concern for ‘Synthesis of a Fe 3 O 4 @P4VP@metal–organic framework core–shell structure and studies of its aerobic oxidation reactivity’ by Zongcheng Miao et al. , RSC Adv. , 2017, 7 , 2773–2779, https://doi.org/10.1039/C6RA25820D .
The Royal Society of Chemistry is publishing this expression of concern in order to alert readers that concerns have been raised regarding the integrity of the XRD data in Fig. 2c and d. The Royal Society of Chemistry has asked the affiliated institution (Beijing University of Chemical Technology) to investigate this matter and confirm the integrity and reliability of the XRD data in Fig. 2c and d. An expression of concern will continue to be associated with this manuscript until we receive information from the institution on this matter. Laura Fisher 9th January 2024 Executive Editor, RSC Advances Supplementary Material
CC BY
no
2024-01-16 23:43:49
RSC Adv.; 14(4):2602
oa_package/38/f9/PMC10788707.tar.gz
PMC10788708
38226146
Introduction How to improve the utilization efficiency of heavy crude oil against the backdrop of dwindling conventional crude oil is an important issue for the petroleum industry. A major difficulty in producing and transporting heavy crude oil is its high viscosity, and asphaltene is the main contributor to that viscosity. The self-aggregation and precipitation caused by asphaltene components in heavy crude oil have long plagued exploration, extraction, storage, transportation and processing in the petroleum industry. 1–3 Studying viscosity reduction of asphaltene can therefore help improve the utilization efficiency of crude oil. 4–7 Asphaltene is composed of condensed polyaromatic rings, alkane chains and heteroatoms such as nitrogen, oxygen and sulfur. 8–10 Although the majority of asphaltenes share a common architecture comprising a polycyclic aromatic core and peripheral aliphatic chains, their size and aromaticity vary considerably. Owing to intermolecular interactions such as π–π stacking, asphaltenes tend to self-associate into nanoaggregates, which then coagulate and eventually flocculate. 11–14 Studies have shown that asphaltene aggregation is influenced by many factors, including the molecular structure and concentration of the asphaltenes, temperature and pressure, and solvent type. 15,16 Among these factors, molecular structure is of particular interest because of its great diversity, for example in the number and position of aromatic rings, polarity, alkyl side chains, molecular weight and molecular symmetry. In recent years many kinds of viscosity-reducing agents have been investigated, such as light oil, low-molecular-weight multifunctional molecules and alcohols. Recent experimental work indicates that many of these additives do not disrupt asphaltene association; for most of them, the observed viscosity reduction arises mainly from dilution. Hasan et al. studied the viscosity behavior of heavy crude oil blended with 10% and 20% alcohol by volume. 17 The presence of 10% alcohol reduces viscosity by almost 80% at 25 °C, and further addition of alcohol causes additional viscosity reduction. They attribute these effects to interactions between the hydroxyl groups and certain functionalities of the asphaltenes; blending heavy crude oil with ethanol thus enhances its flowability. Mortazavi-Manesh and Shaw investigated the effect of diluents ( n -heptane, toluene, and toluene : butanone (50 : 50 vol%)) on the non-Newtonian behavior of Maya crude oil, including shear thinning and thixotropy, at temperatures from 258 to 333 K. 18 They concluded that toluene : butanone (50 : 50 vol%) is more effective at decreasing oil viscosity than the other two diluents tested. Despite this extensive experimental work, the atomic-level details of how viscosity-reducing agents affect the aggregation of heavy-oil components remain largely unknown, because they cannot be observed directly in experiments. Notably, over the past few decades, experimental and theoretical studies have made great efforts to understand the aggregation behavior of asphaltene molecules.
In experimental terms, instrumental characterization methods such as small-angle neutron scattering (SANS), small-angle X-ray scattering (SAXS), wide-angle X-ray scattering (WAXS), Rayleigh scattering, nanofiltration and dynamic light scattering (DLS)/photon correlation spectroscopy (PCS) have been used to study the aggregation behavior of asphaltenes. 19–21 It was found that the size of the asphaltene aggregates depends to a large extent on the structure and composition of the asphaltenes, and skewed parallel stacking of polycyclic nuclei within the asphaltene nanoaggregates is commonly proposed. 22–28 On the theoretical side, atomistic molecular simulations have been carried out extensively to study the interactions between asphaltene molecules and their aggregate structures. The effects of molecular structure, salt ions, solvent molecules such as toluene and heptane, and temperature on asphaltene aggregation behavior have been explored in detail by molecular dynamics simulations. 29–37 Very recently, Pétuya et al. introduced a machine-learning approach to identify a reduced set of model molecules representative of the diversity of asphaltene. 38 Their studies highlighted the complex and diverse effects of molecular polydispersity on the aggregation process and indicate that molecular polydispersity must be considered when studying asphaltene aggregation. The study reported by Javanbakht et al. also demonstrates the importance of polydispersity for asphaltene aggregation and provides a lower limit of approximately 375 molecules in such a mixture to represent the two stages of aggregation. 39 On the other hand, the dynamics of asphaltene molecules under a shear field has also attracted interest in the research community. It was found that increasing shear force can enhance the aggregation rate of asphaltene, while the average steady-state floc size decreases with increasing shear force; increasing the asphaltene concentration in solution or reducing the toluene-to-heptane ratio increases both the growth rate of the flocs and the steady-state floc size. 40–43 Bahrami et al. examined the effect of shear rate on asphaltene aggregation in heptane–toluene mixtures at constant temperature and pressure and showed that, under shear, the aggregated particles formed by asphaltene were denser and formed more quickly. 44 Song et al. used dissipative particle dynamics to study the effect of shear rate on the dispersion behavior of asphaltene molecules in heptane. Their simulations showed that asphaltene molecules mostly stack face-to-face or in T-shaped configurations and that a shear field damages this stacking to varying degrees; the effect of shear on archipelago-type asphaltene molecules is greater than on continental-type molecules, mainly because the shear field stretches the archipelago molecules. 45 Nassar et al. attribute the decrease in asphaltene viscosity under shear to the disruption of the bonding forces between asphaltene molecules, which increases the free energy of dimer formation. 46
In this study, the effects of a toluene additive on the viscosity of model asphaltene molecules containing polycyclic cores in the presence of a shear field were investigated by molecular dynamics simulations. The main purpose is to determine how a viscosity-reducing additive (toluene) affects the viscosity and molecular interactions of asphaltene molecules with different topological structures. To this end, the dispersion behavior and viscosity of five structurally homologous "continental" model asphaltene molecules with different benzene ring arrangements, alkyl side chains and heteroatoms were simulated using non-equilibrium molecular dynamics at different toluene concentrations. Over 50 simulation systems were constructed and each system was simulated for more than 60 ns. Atomic-level insights were obtained into the effects of asphaltene molecular structure and toluene concentration on the aggregation and dispersion behaviors and the shear viscosity of these model asphaltenes.
Molecular simulation method and models Construction of molecular models The structural models of the asphaltene compounds are based on the structures of coal asphaltene molecules measured by Schuler et al. using atomic force microscopy. 8,14 We selected five representative condensed-ring cores with different benzene ring arrangements, namely O, I, T, Y and L, and then introduced alkyl side chains and heteroatoms to construct 15 asphaltene molecular models. According to their molecular structure and composition, these models fall into three categories: PAHs-I0, PAHs-O0, PAHs-T0, PAHs-Y0 and PAHs-L0; PAHs-I1, PAHs-O1, PAHs-T1, PAHs-Y1 and PAHs-L1, which contain alkyl side chains; and PAHs-I2, PAHs-O2, PAHs-T2, PAHs-Y2 and PAHs-L2, which contain heteroatoms. All molecular structures are shown in Fig. 1 . Recent theoretical work by Law et al. 47 emphasized that asphaltene molecular polydispersity must be considered to properly simulate properties such as the aggregation behavior of asphaltene systems. In the present study, however, we focus on the effects of local molecular structure, such as the aromatic-core stacking pattern, alkyl side chains and heteroatoms, on the intermolecular interactions, viscosity reduction and molecular dispersion behavior upon addition of solvent molecules under shear. We therefore used the five types of structurally homologous model asphaltene molecules with different benzene ring arrangements, alkyl side chains and heteroatoms. To determine the relative position and orientation of asphaltene molecules, the center of mass (COM) of each asphaltene molecule is first calculated based on its flat structure. The peaks of the radial distribution function (RDF) are then used to characterize the distribution of COM–COM distances between asphaltenes. The COM distance of face-to-face stacked asphaltene molecules is the smallest and corresponds to the first peak of the RDF; the COM distance of offset-stacked molecules is larger and corresponds to the second peak; and the COM distance of "T-shaped" stacked molecules corresponds to the third peak. The average distance (AD) between aggregated molecules can also be obtained from the radial distribution function and is used to analyze the size and number of asphaltene molecular aggregates in the simulation box. Molecular dynamics simulations The simulation of every asphaltene system consists of the following steps: (I) 40 asphaltene molecules of the same type are randomly distributed in the initial simulation box, with the initial density of the system set to 0.6 g cm−3, which is lower than the density of conventional asphaltene. (II) The density and energy of the randomly generated simulation box are calibrated (including monitoring of the kinetic and potential energy); each system is equilibrated through a 20 ns isothermal–isobaric (NPT) simulation at a constant pressure of 1 bar and a temperature of 300 K, which gives the system an appropriate density and box size. (III) A 60 ns NVT simulation is performed using a Nosé-Hoover thermostat at 300 K; in this stage a shear field with a shear rate of 1 × 10−7 fs−1 is applied to the simulation box, with a time step of 1 fs. (IV) Data are collected after equilibration.
The atomic positions, forces, velocities and potential energy generated after the complete relaxation and equilibration of the system (after 10 ns of NVT simulation) were collected for subsequent analysis. All MD simulations were executed with the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) package. 48 The PCFF force field was used. PCFF is an all-atom force field that provides good accuracy for liquid properties (density and cohesive energy) and molecular conformation. 49 It consists of bonded and non-bonded potential energy terms, where the non-bonded terms comprise long-range electrostatic interactions and short-range van der Waals (vdW) interactions, and it can accurately predict the structure and thermodynamic properties of petroleum components. 50–53 The Lennard-Jones potential describes the non-bonded interactions between sites, with a cutoff of 12.0 Å. Long-range Coulomb interactions are handled with the particle-particle particle-mesh (PPPM) algorithm, 54 with a convergence parameter of 10−4. The time step and the data-collection interval are set to 1 fs and 1 ps, respectively. The temperature was set to 300 K and the pressure to 1 bar. In each simulation, 40 model asphaltene molecules of a given type and different numbers of toluene molecules are randomly placed in a cubic box, and periodic boundary conditions apply in all directions. The VMD software 55 was used for trajectory analysis and visualization, and the velocity Verlet algorithm was used for integration in the MD simulations. To investigate the effect of toluene concentration on the aggregation and viscosity of asphaltene, we examined the shear viscosity of asphaltene molecules with different structures at different toluene concentrations. In total, molecular simulations were conducted on 50 model systems ( Table 1 ) with different toluene concentrations. As shown in Fig. 2a , taking the NPT and NVT potential energy convergence of the PAHs-I0 system at a toluene mass concentration of 10% as an example, the density and potential energy of all simulation systems are fully converged. For the viscosity calculation, as shown in Fig. 2b , the first 10 ns of the 60 ns trajectory show significant viscosity fluctuations and are excluded from the analysis; only the final 50 ns of the trajectory are recorded and analyzed. We also compared the viscosity data obtained by averaging every 10, 50, 100 and 1000 steps and found that the viscosities obtained with the four sampling intervals were fully converged and essentially identical.
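For readers who want to repeat this kind of convergence check, a minimal post-processing sketch is given below. It assumes the xy component of the pressure tensor was written to a plain two-column text file (step, Pxy in atmospheres) during the sheared NVT run and uses the standard NEMD relation eta = -<P_xy>/gamma_dot; the file name, column layout and 1 fs time step are assumptions, not details quoted from the paper.

```python
# Hedged sketch: shear viscosity from the time-averaged off-diagonal pressure
# tensor, comparing several sampling strides as described in the text.
import numpy as np

ATM_TO_PA = 101325.0
SHEAR_RATE = 1e-7 / 1e-15            # 1e-7 fs^-1 expressed in s^-1

data = np.loadtxt("pxy_vs_step.txt")  # placeholder file: columns = step, Pxy (atm)
steps, pxy_atm = data[:, 0], data[:, 1]
mask = steps > 10_000_000             # discard the first 10 ns (1 fs timestep assumed)

for stride in (10, 50, 100, 1000):
    p_avg = pxy_atm[mask][::stride].mean() * ATM_TO_PA
    eta_pa_s = -p_avg / SHEAR_RATE    # Pa s
    print(f"stride {stride:5d}:  eta ~ {eta_pa_s * 1000:.1f} cP")
```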
Results and discussions Effect of the distribution of phenyl rings on the aggregation and viscosity of asphaltene Asphaltene is a key contributor to the high viscosity of heavy oil. This section takes the five molecules of the PAHs-0 series (PAHs-I0, PAHs-O0, PAHs-T0, PAHs-Y0, PAHs-L0) as examples to study the effect of the benzene ring distribution pattern in continental-type asphaltene molecules on their viscosity. As shown in Fig. 3 , the simulations reveal significant differences in the viscosity values of asphaltene molecules with different benzene ring arrangements, indicating that the benzene ring distribution pattern significantly affects the molecular viscosity. When toluene is added to the asphaltene system, the toluene molecules disrupt the aggregate structure of the asphaltene, leading to a decrease in viscosity. At a toluene concentration of 0 wt%, the maximum viscosity difference among the five asphaltene molecules is 69.2 cP; at 10 wt% toluene the difference increases to 75.7 cP, but at 20 wt% toluene it decreases to 38.8 cP. As the toluene concentration increases further, the viscosity differences between the different asphaltene configurations no longer change significantly, and the benzene-ring-distribution effect is almost no longer visible. At the same time, at a toluene concentration of 20 wt%, the viscosity of all five asphaltene systems drops to a relatively stable value, indicating that asphaltene reaches its optimal dissolution state in 20 wt% toluene. It is worth noting that for toluene concentrations in the range 0–20 wt%, the viscosity order of the five asphaltenes is PAHs-L0 > PAHs-T0 > PAHs-O0 > PAHs-I0 > PAHs-Y0. This result deviates from traditional views, as the current simulations show that the viscosity of continental asphaltene with an O-type structure is lower than that of the L-type and T-type molecules. To understand this trend, we performed a normalized radial distribution function (NRDF) analysis of the aggregate structures formed by the five asphaltene molecules in the PAHs-0 systems. The NRDF is defined as NRDF(r) = g(r)/g(r)max, where g(r)max is the maximum peak of the radial distribution function g(r). 54 The closer the convergence value of the NRDF is to 1, the more disordered the intermolecular stacking; the closer it is to 0, the more ordered the stacking. 56–59 As shown in Fig. 3b–f , when the toluene concentration is below 20 wt%, the converged NRDF for almost all asphaltenes is >0.5, indicating that the stacking of the asphaltene molecules is relatively disordered. The NRDF order is PAHs-L0 > PAHs-Y0 > PAHs-T0 > PAHs-I0 > PAHs-O0. These results show that PAHs-O0 asphaltene molecules have a relatively ordered stacking structure, consistent with the larger planar configuration of this molecule. However, the viscosity of the O-type asphaltene is inconsistent with the trend of the normalized radial distribution function.
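The NRDF is simple to compute once g(r) is available from any RDF routine; a small helper in the spirit of the definition above is sketched below (the choice of the outer 20% of r as the converged tail is an assumption for illustration).

```python
# Minimal NRDF helper: normalize g(r) by its maximum and report the long-range
# plateau, whose value indicates stacking order (near 0 = ordered, near 1 = disordered).
import numpy as np

def normalized_rdf(r, g):
    """Return NRDF(r) = g(r) / max g(r) and its long-range convergence value."""
    r = np.asarray(r, dtype=float)
    g = np.asarray(g, dtype=float)
    nrdf = g / g.max()
    plateau = nrdf[r > 0.8 * r.max()].mean()   # outer 20% of r taken as the tail
    return nrdf, plateau
```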
Further analysis of the viscosity curves of the five PAHs-0 systems shows that the slope of the viscosity decrease with increasing toluene content is smallest for the O-type asphaltene and largest for the Y-type asphaltene. Toluene therefore has a particularly strong effect on the viscosity of asphaltene molecules whose many benzene rings are distributed in a branched pattern. As shown in Fig. 4 , PAHs-O0 is a typical continental asphaltene molecule in which multiple benzene rings fuse into a large ring configuration, and its molecular motion under shear is relatively smooth. The benzene rings of PAHs-Y0 are highly branched, and the motion of this molecule under shear exhibits significant steric hindrance. This indicates that the distribution of phenyl rings in asphaltene molecules affects their aggregation behavior and viscosity under external shear stress: branched benzene ring distributions mainly cause a structural hindrance effect, which increases the viscosity under shear. Branch chain effect This section reports systematic molecular dynamics simulations of the five asphaltene molecules belonging to the PAHs-1 series to explore the effect of alkyl branching on asphaltene viscosity. The toluene additive effect is analyzed first. As shown in Fig. 4 , when the toluene concentration is below 20 wt%, the viscosity of PAHs-O1 asphaltene is almost twice that of PAHs-O0. When the toluene content exceeds 20 wt%, the viscosity difference between PAHs-O0 and PAHs-O1 rapidly decreases and the two converge, indicating that the effect of side chains on asphaltene viscosity is almost negligible above 20 wt% solvent. Below 20 wt% toluene, the alkyl chains increase the structural hindrance of the asphaltene molecules, raising their viscosity. To further understand the effect of alkyl chains on the aggregation mode and viscosity, the normalized radial distribution functions and molecular stacking configurations of the PAHs-1 asphaltene molecules at a toluene concentration of 20 wt% were analyzed. Comparing the PAHs-0 and PAHs-1 asphaltene molecules at 20 wt% toluene, as shown in Fig. 5 , the largest viscosity difference is 30.2 cP between the PAHs-L0 and PAHs-L1 molecules, 27.8 cP between the PAHs-I0 and PAHs-I1 molecules, 19.4 cP between the PAHs-Y0 and PAHs-Y1 molecules, and 9.9 cP for the PAHs-O and PAHs-T series. These results show that the same alkyl chains affect asphaltene molecules with different polyaromatic core structures differently. As shown in Fig. 5b–f , except for the Y-type molecules, the maximum peaks of the NRDFs of the PAHs-1 molecules shift to smaller values compared with those of the PAHs-0 molecules, indicating that the alkyl side chains reduce the stacking distance between asphaltene molecules. In addition, the convergence values of the NRDFs of the PAHs-1 series are lower than those of the PAHs-0 series.
This indicates that the branched alkyl chains increase the interaction between asphaltene molecules, making them more inclined toward ordered face-to-face stacking. Fig. 6 and S1 † show the stacking configurations of the asphaltene molecules at equilibrium. Toluene molecules do not enter the interior of the face-to-face stacks of asphaltene molecules and exist only in the interstices between aggregates. Fig. 6a and b display typical packing snapshots of PAHs-L type asphaltene molecules. PAHs-L0 molecules exhibit planar stacking, but because of the L-shaped distribution of their benzene rings they do not stack into a perfect face-to-face configuration and instead show offset π-stacking, as highlighted in the enlarged structural motifs. In contrast, the polyaromatic cores of the alkyl-branched PAHs-L1 molecules exhibit more perfect face-to-face stacking, including head-to-head and head-to-tail arrangements. A similar side-chain effect is seen in the other systems. As shown in Fig. 6c and d , for PAHs-T0 molecules the central part of the polyaromatic core forms a face-to-face stacking conformation, while the other branched benzene rings are arranged randomly. PAHs-T1 molecules containing side chains have the same face-to-face stacking of the fused-ring core, but the alkyl side chains hinder molecular motion, resulting in more face-to-face stacking configurations. Snapshots of the PAHs-I type molecules are shown in Fig. S1, † where the presence of branched chains enhances the interaction between PAHs-I molecules, and PAHs-I1 molecules exhibit typical face-to-face stacking. For the PAHs-Y type asphaltene, as shown in Fig. S1, † the highly branched benzene rings of PAHs-Y0 give a poor stacking order, whereas under the action of the alkyl branches the interaction between PAHs-Y1 molecules is enhanced and the face-to-face stacking configurations increase markedly. The snapshots of the PAHs-O type molecules show a distinctive stacking conformation. As shown in Fig. S1, † PAHs-O0 molecules form a long-range face-to-face stacking structure; in contrast, PAHs-O1 molecules cannot form long-range face-to-face stacks because of the steric hindrance of the branched chains, although the distance between the asphaltene molecules is shortened. Taken together, the branched alkyl chains enhance the interactions between PAHs-1 type asphaltene molecules, and the face-to-face stacking configuration increases for most PAHs-1 type molecules. Heteroatom effect This section reports molecular dynamics simulations of the PAHs-2 systems to investigate the effect of heteroatoms on the aggregation behavior and viscosity of asphaltene molecules. As shown in Fig. 7 , as the toluene content increases to 10 wt%, the slope of the viscosity decrease of PAHs-O2 asphaltene is greater than that of PAHs-O1. When the toluene concentration is between 10 wt% and 30 wt%, the viscosity decrease rates of PAHs-O1 and PAHs-O2 are similar, and when the solvent content exceeds 40 wt% the two systems show similar viscosity values.
The heteroatom effect is therefore explored by comparing the viscosity, normalized radial distribution functions and trajectories of the PAHs-1 and PAHs-2 systems at 20 wt% toluene. As shown in Fig. 8a , at a toluene concentration of 20 wt% the viscosity difference between PAHs-1 and PAHs-2 ranges from 51.1 cP to 69.6 cP, whereas the viscosity difference between the PAHs-0 and PAHs-1 asphaltene molecules is only 9.9 cP to 27.8 cP. This shows that the introduction of heteroatoms has a much greater impact on asphaltene viscosity than the branched chains do. Analysis of the NRDFs ( Fig. 8b–f ) shows that the positions of the maximum peaks for the heteroatom-containing asphaltene molecules shift to smaller values compared with the PAHs-1 molecules, indicating that the introduction of heteroatoms further narrows the distance between asphaltene molecules and results in more face-to-face stacking. To verify that the heteroatom effect is the key reason for the viscosity increase, the molecular polarity was analyzed. Dipole moments were calculated for all asphaltene molecules with the Gaussian 09 package 60 at the B3LYP/6-31g* level and with the PCFF force field parameters (Table S1 † ). The molecules of the PAHs-2 series have the largest dipole moments owing to the introduction of heteroatoms. These results confirm that the heteroatom effect is a key factor in the viscosity increase of asphaltene: the molecular interactions are strengthened by the larger dipole moments caused by the introduction of N and S heteroatoms. We note that this finding agrees well with the previous study by Santos Silva et al. , 61 which indicated that heteroatom substitution on the conjugated core does not modify the shape of the nanoaggregate but changes considerably the interaction energy between asphaltene molecules. The simulation snapshots of the PAHs-2 systems after the NVT ensemble calculations are shown in Fig. S2. † The toluene molecules do not enter the interior of the face-to-face stacks of asphaltene molecules and exist only in the interstices of the aggregates. Comparing the snapshots of the PAHs-2 and PAHs-1 systems, the asphaltene molecules in the PAHs-2 systems form more perfect face-to-face stacking configurations. It is particularly noteworthy that even the PAHs-Y2 molecules exhibit many face-to-face stacking configurations, indicating that heteroatoms can enhance asphaltene aggregation, although their effects differ between asphaltene molecules with different structures. Fracture recombination effect of asphaltene aggregates Under a shear field, the aggregates of asphaltene molecules exhibit fracture-recombination behavior. Inspection of the 50 ns MD trajectories of the asphaltene molecules in the box clearly shows that, under the combined action of the shear field and the toluene additive, the asphaltene aggregates undergo continuous aggregation and fragmentation. Fig. 9 schematically illustrates the typical fragmentation and recombination process of asphaltene molecular aggregates, which includes the formation of smaller aggregates under the combined effect of shear and toluene, and the mutual attraction and recombination of smaller aggregates into new aggregates.
The aggregation of asphaltene molecules can be quantified based on their distance as a standard. Currently, three distance standards are commonly used: (1) the distance between the closest atoms on two adjacent molecules; (2) the distance between an atom in two adjacent molecules; and (3) the distance between the centroids (COM) of two adjacent molecules. The first or third criterion is used the most. In a recent study carried out by Ghamartale et al. , 62 the distance between the closest atoms was used and the z-averaged aggregation numbers, g z , was used to calculate the aggregate size. The authors also discussed the suitable criteria that predict the aggregates for a certain type of molecules. In this study, we used the distance between the centroids (COM) of the aromatic cores of two adjacent molecules and applied a cutoff threshold of 0.47 nm as the standard for asphaltene aggregation, which was previously used to counter the nanoaggregate of model asphaltene molecules. 63 Here, we take the PAHs-O series of asphaltene molecules as an example to analyze the number and size changes of asphaltene molecular aggregates under shear under the condition of 20 wt% toluene. As shown in Fig. 10 , comparing PAHs-O0, PAHs-O1, and PAHs-O2, the analysis results show that the stability of the number of asphaltene aggregates in the box is relative. Throughout the simulation process, the number of aggregates in PAHs-O0, PAHs-O1, and PAHs-O2 has been constantly changing. Among them, the number of aggregates of PAHs-O0 asphaltene molecules fluctuates the most significantly, and the size change of the largest aggregate is significant. The number of aggregates of PAHs-O1 asphaltene molecules remains within a relatively stable range, and the presence of alkyl branched chains significantly leads to a decrease in the size of asphaltene aggregates. The number of aggregates of PAHs-O2 asphaltene molecules is relatively stable, and the size of the largest asphaltene aggregate is also the most stable. This indicates that alkyl branched chains weaken the intermolecular motion of asphaltene, while polar molecules further weaken the intermolecular motion of asphaltene. This may also be the reason why the viscosity of PAHs-O2 > PAHs-O1 > PAHs-O0. At the same time, in the presence of alkyl chains and hetero atoms, the maximum aggregate size of asphaltene will further increase, which is the most direct evidence of hetero atoms promoting the self-aggregation behavior of asphaltene in the shear field.
Results and discussions Effect of distribution of phenyl rings on the aggregation and viscosity of asphaltene Asphaltene is a key contributor to the high viscosity of heavy oil. This section takes the five molecules of the PAHs-0 series (PAHs-I0, PAHs-O0, PAHs-T0, PAHs-Y0, PAHs-L0) as examples to study how the distribution pattern of benzene rings in continental-type asphaltene molecules affects their viscosity. As shown in Fig. 3 , the simulation results show significant differences in the viscosity values of asphaltene molecules with different benzene ring arrangements, indicating that the benzene ring distribution pattern strongly affects the viscosity of the molecules. It can also be seen that when toluene solvent is added to the asphaltene system, the toluene molecules disrupt the aggregate structure of the asphaltene, leading to a decrease in viscosity. When the concentration of toluene solvent is 0 wt%, the maximum viscosity difference among the five asphaltene molecules is 69.2 cP. When the concentration of toluene solvent is 10 wt%, the viscosity difference increases to 75.7 cP. However, when the concentration of toluene increases to 20 wt%, the viscosity difference between them decreases to 38.8 cP. As the concentration of toluene solvent increases further, the viscosity differences between the different asphaltene configurations no longer change significantly, and the benzene ring distribution effect is barely discernible. At the same time, when the concentration of toluene solvent reaches 20 wt%, the viscosity of the five asphaltene systems decreases to a relatively stable value, indicating that the asphaltene reaches its optimal dissolution state in 20 wt% toluene. It is worth noting that when the concentration of toluene solvent is in the range of 0 wt%–20 wt%, the viscosity order of the five asphaltenes is: PAHs-L0 > PAHs-T0 > PAHs-O0 > PAHs-I0 > PAHs-Y0. This result deviates from traditional views, as the current simulations show that the viscosity of continental asphaltene with an O-type structure is lower than that of L-type and T-type asphaltene molecules. To understand this trend, we conducted Normalized Radial Distribution Function (NRDF) analysis on the aggregate structures formed by the five asphaltene molecules in the PAHs-0 system. The NRDF is defined as NRDF( r ) = g ( r )/ g ( r ) max , where g ( r ) max is the maximum peak of the radial distribution function g ( r ). 54 The closer the convergence value of the NRDF is to 1, the more disordered the intermolecular stacking; the closer it is to 0, the more ordered the stacking. 56–59 As shown in Fig. 3b–f , when the concentration of toluene solvent is below 20 wt%, the converged NRDF for almost all asphaltenes is >0.5, indicating that the stacking of asphaltene molecules is relatively disordered. The order of the NRDF is PAHs-L0 > PAHs-Y0 > PAHs-T0 > PAHs-I0 > PAHs-O0. These results show that PAHs-O0 type asphaltene molecules have a relatively ordered stacking structure, which corresponds to the larger planar configuration of such molecules. However, the viscosity of the O-type asphaltene is inconsistent with the trend of the normalized radial distribution function.
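As a minimal illustration of the normalization just described (a sketch only, not the analysis script used in this work), the snippet below converts a radial distribution function g(r) into the NRDF by dividing by its maximum peak; the g(r) values are synthetic placeholders standing in for data extracted from the MD trajectories.

```python
import numpy as np

def normalized_rdf(g):
    """NRDF(r) = g(r) / g(r)_max: values converging near 1 at large r
    indicate disordered packing, values well below 1 indicate ordered
    face-to-face stacking."""
    g = np.asarray(g, dtype=float)
    return g / g.max()

# Synthetic g(r) with a stacking peak near 0.45 nm (placeholder data).
r = np.linspace(0.2, 2.0, 200)                      # separation, nm
g = 1.0 + 2.5 * np.exp(-((r - 0.45) / 0.05) ** 2)   # toy g(r)
nrdf = normalized_rdf(g)
print(f"NRDF converges to ~{nrdf[-1]:.2f} at r = {r[-1]:.1f} nm")
```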
By further analyzing the viscosity curves of the five PAHs-0 systems, it can be seen that the slope of the viscosity decrease with increasing toluene solvent is smallest for the O-type asphaltene and largest for the Y-type asphaltene. This shows that toluene solvent has a significant impact on the viscosity of asphaltene molecules whose numerous benzene rings follow a branching distribution pattern. As shown in Fig. 4 , PAHs-O0 is a typical continental asphaltene molecule in which multiple benzene rings fuse into one large ring system, and its molecular movement under shear is relatively smooth. The benzene rings of PAHs-Y0 are highly branched, and the movement of the molecule under shear exhibits significant steric hindrance. This indicates that the distribution of phenyl rings in asphaltene molecules affects their aggregation behavior and viscosity under external shear stress. Branching-distributed benzene rings mainly cause a structural hindrance effect, which leads to an increase in viscosity under shear conditions. Branch chain effect This section presents systematic molecular dynamics simulations of the five asphaltene molecules belonging to the PAHs-1 series to explore the effect of alkyl branching on the viscosity of asphaltene. The toluene additive effect is analyzed first. As shown in Fig. 4 , when the concentration of toluene solvent is below 20 wt%, the viscosity of PAHs-O1 asphaltene molecules is almost twice that of PAHs-O0 asphaltene molecules. When the toluene solvent exceeds 20 wt%, the viscosity difference between PAHs-O0 and PAHs-O1 rapidly decreases and the two values converge, indicating that the effect of side chains on the viscosity of asphaltene molecules becomes negligible at solvent concentrations above 20 wt%. Below 20 wt% toluene, the alkyl chains increase the structural hindrance of the asphaltene molecules, leading to an increase in viscosity. To further understand the effect of alkyl chains on the aggregation mode and viscosity, the normalized radial distribution functions and molecular stacking configurations of the PAHs-1 type asphaltene molecules at a toluene additive concentration of 20 wt% were analyzed. Comparing the PAHs-0 and PAHs-1 type asphaltene molecules, as shown in Fig. 5 , at a toluene additive concentration of 20 wt% the largest viscosity difference between the PAHs-L0 and PAHs-L1 asphaltene molecules is 30.2 cP, that between the PAHs-I0 and PAHs-I1 molecules is 27.8 cP, that between the PAHs-Y0 and PAHs-Y1 molecules is 19.4 cP, and that for the PAHs-O and PAHs-T series is 9.9 cP. These results show that the same alkyl chains have different effects on asphaltene molecules with different polyaromatic core structures. As shown in Fig. 5b–f , except for the Y-type molecules, the maximum peaks of the NRDFs of the PAHs-1 molecules shift to smaller values compared with those of the PAHs-0 molecules. This indicates that the presence of alkyl side chains reduces the stacking distance between asphaltene molecules. In addition, the convergence values of the NRDF of the PAHs-1 series are lower than those of the PAHs-0 series.
This indicates that branching alkyl chains increase the interaction between asphaltene molecules, making them more inclined towards orderly face-to-face stacking. Fig. 6 and S1 † show the stacking configurations of the asphaltene molecules in the equilibrium state. It can be observed that the toluene molecules have not entered the interior of the face-to-face stacking structures of the asphaltene molecules and exist only in the interstices between aggregates. Fig. 6a and b display typical packing snapshots of PAHs-L type asphaltene molecules; PAHs-L0 type molecules exhibit planar stacking. However, owing to the L-shaped distribution of their benzene rings, they do not stack into a perfect face-to-face configuration but instead show offset π-stacking, as highlighted by the structural motifs in the enlarged image. In contrast, the polyaromatic cores of the PAHs-L1 type asphaltene molecules carrying alkyl branched chains exhibit more perfect face-to-face stacking, including head-to-head and head-to-tail stacking. A similar side-chain effect is seen in the other systems. As shown in Fig. 6c and d , for PAHs-T0 type asphaltene molecules the central parts of the polyaromatic cores form a face-to-face stacking conformation, while the branched benzene rings are randomly arranged. The PAHs-T1 type asphaltene molecules containing side chains show the same dense-core face-to-face stacking, but the alkyl side chains hinder the movement of the PAHs-T1 molecules, resulting in more face-to-face stacking configurations. The snapshots of PAHs-I type asphaltene molecules are shown in Fig. S1, † where the presence of branched chains enhances the interaction between PAHs-I type molecules; PAHs-I1 type molecules exhibit typical face-to-face stacking. For the PAHs-Y type asphaltene, as shown in Fig. S1, † the highly branched benzene rings of the PAHs-Y0 molecules lead to poor stacking order. Under the action of the alkyl branched chains, the interaction between PAHs-Y1 type molecules is enhanced and the number of face-to-face stacking configurations increases markedly. The snapshots of the PAHs-O type asphaltene molecules show a distinct stacking behavior. As shown in Fig. S1, † PAHs-O0 type molecules form a long-range face-to-face stacking structure. In contrast, PAHs-O1 type molecules cannot form long-range face-to-face stacks because of the steric hindrance of the branched chains, but the distance between the asphaltene molecules is shortened. Based on the above results, the branching alkyl chains enhance the interactions between PAHs-1 type asphaltene molecules, and the number of face-to-face stacking configurations increases for most PAHs-1 type molecules. Heteroatoms effect This section presents molecular dynamics simulations of the PAHs-2 system to investigate the effect of heteroatoms on the aggregation behavior and viscosity of asphaltene molecules. As shown in Fig. 7 , as the toluene additive is increased to 10 wt%, the slope of the viscosity decrease for PAHs-O2 asphaltene is greater than that for PAHs-O1 asphaltene. When the concentration of the toluene additive is between 10 wt% and 30 wt%, the viscosity decrease rates of PAHs-O1 and PAHs-O2 asphaltene are similar. When the solvent concentration exceeds 40 wt%, the two systems show similar viscosity values.
Therefore, the heteroatom effect is explored by comparing the viscosity, normalized radial distribution functions, and trajectories of the PAHs-1 and PAHs-2 systems with 20 wt% toluene additive. As shown in Fig. 8a , when the concentration of toluene additive is 20 wt%, the viscosity difference between PAHs-1 and PAHs-2 ranges from 51.1 cP to 69.6 cP. For comparison, the viscosity difference between the PAHs-0 and PAHs-1 asphaltene molecules is only 9.9 cP to 27.8 cP. This shows that the introduction of heteroatoms has a much greater impact on the viscosity of asphaltene molecules than the branched chains do. Analyzing all NRDFs, as shown in Fig. 8b–f , the maximum peaks of the NRDFs for the heteroatom-containing asphaltene molecules shift to smaller values compared with the PAHs-1 asphaltene molecules, indicating that the introduction of heteroatoms further narrows the distance between asphaltene molecules, resulting in more face-to-face stacking. To verify that the heteroatom effect is the key reason for the viscosity increase, the molecular polarity was analyzed. The dipole moments of all asphaltene molecules were calculated using the Gaussian 09 package 60 at the B3LYP/6-31g* level of theory together with the PCFF force field parameters (Table S1 † ). The molecules of the PAHs-2 series have the largest dipole moments owing to the introduction of heteroatoms. These results confirm that the heteroatom effect is a key factor in the viscosity increase of asphaltene: the molecular interactions are strengthened by the larger dipole moments caused by the introduction of N and S heteroatoms. We note that this finding agrees well with the previous study by Santos Silva et al. , 61 which indicated that heteroatom substitution on the conjugated core does not modify the shape of the nanoaggregate but changes considerably the energy of interaction between asphaltene molecules. The simulation snapshots of the PAHs-2 system after the NVT ensemble calculation are shown in Fig. S2. † The toluene molecules did not enter the interior of the face-to-face stacks of asphaltene molecules and exist only in the interstices of the aggregates. Comparing the snapshots of the PAHs-2 and PAHs-1 systems, the asphaltene molecules in the PAHs-2 system form a more perfect face-to-face stacking configuration. It is particularly noteworthy that even the PAHs-Y2 type asphaltene molecules exhibit many face-to-face stacking configurations, indicating that heteroatoms can enhance the aggregation of asphaltene, although their effects on asphaltene molecules with different structures differ. Fracture recombination effect of asphaltene aggregates Under the action of shear fields, the aggregates of asphaltene molecules exhibit fracture-recombination behavior. From the 50 ns MD trajectories of the asphaltene molecules in the box, it can be clearly observed that, under the combined action of the shear field and the toluene additive, the asphaltene aggregates undergo continuous aggregation and fragmentation. Fig. 9 schematically illustrates the typical fragmentation and recombination process of the asphaltene molecular aggregates, which includes the formation of smaller aggregates under shear and toluene interactions, and the mutual attraction and recombination of these smaller aggregates into new aggregates.
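Complementing the dipole-moment analysis earlier in this section, which relies on quantum-chemical (B3LYP/6-31G*) values reported in Table S1, a rough classical estimate can be obtained from force-field partial charges and coordinates. The sketch below is a hedged illustration only, not the calculation performed in this work, and the three-site fragment, its charges, and its geometry are purely hypothetical.

```python
import numpy as np

EA_TO_DEBYE = 4.8032  # 1 e*angstrom expressed in debye (approximate)

def point_charge_dipole(charges_e, coords_angstrom):
    """Classical dipole moment (debye) of a neutral fragment from
    partial charges (e) and Cartesian coordinates (angstrom)."""
    q = np.asarray(charges_e, dtype=float)
    xyz = np.asarray(coords_angstrom, dtype=float)
    mu_vec = (q[:, None] * xyz).sum(axis=0)      # e*angstrom
    return np.linalg.norm(mu_vec) * EA_TO_DEBYE  # debye

# Illustrative polar three-site fragment (hypothetical charges/geometry).
charges = [0.45, -0.45, 0.00]
coords = [[0.00, 0.00, 0.00],
          [1.22, 0.00, 0.00],
          [-1.00, 1.00, 0.00]]
print(f"dipole ~ {point_charge_dipole(charges, coords):.2f} D")
```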
The aggregation of asphaltene molecules can be quantified using an intermolecular distance criterion. Three distance criteria are commonly used: (1) the distance between the closest atoms of two adjacent molecules; (2) the distance between a specified atom in each of two adjacent molecules; and (3) the distance between the centroids (COM) of two adjacent molecules. The first and third criteria are used most often. In a recent study by Ghamartale et al. , 62 the closest-atom distance was used and the z-averaged aggregation number, g z , was used to calculate the aggregate size; the authors also discussed which criteria are suitable for predicting the aggregates of a given type of molecule. In this study, we used the distance between the centroids (COM) of the aromatic cores of two adjacent molecules with a cutoff threshold of 0.47 nm as the criterion for asphaltene aggregation, a value previously used to count the nanoaggregates of model asphaltene molecules. 63 Here, we take the PAHs-O series of asphaltene molecules as an example to analyze the changes in the number and size of asphaltene aggregates under shear at 20 wt% toluene. As shown in Fig. 10 , comparing PAHs-O0, PAHs-O1, and PAHs-O2, the number of asphaltene aggregates in the box is only relatively stable: throughout the simulation, the numbers of aggregates in PAHs-O0, PAHs-O1, and PAHs-O2 change constantly. Among them, the number of aggregates of the PAHs-O0 asphaltene molecules fluctuates most strongly, and the size of its largest aggregate changes significantly. The number of aggregates of the PAHs-O1 asphaltene molecules remains within a relatively stable range, and the presence of alkyl branched chains clearly leads to a decrease in the size of the asphaltene aggregates. The number of aggregates of the PAHs-O2 asphaltene molecules is relatively stable, and the size of its largest aggregate is also the most stable. This indicates that alkyl branched chains restrict the relative motion of the asphaltene molecules, while polar heteroatom-containing molecules restrict it even further, which may also explain why the viscosity follows PAHs-O2 > PAHs-O1 > PAHs-O0. At the same time, in the presence of alkyl chains and heteroatoms, the maximum aggregate size of the asphaltene increases further, which is the most direct evidence that heteroatoms promote the self-aggregation of asphaltene in the shear field.
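As a minimal sketch of the third criterion described above (assuming the 0.47 nm centroid cutoff, ignoring periodic boundaries, and using hypothetical centroid coordinates rather than trajectory data), the snippet below groups molecules into aggregates by linking any two whose aromatic-core centroids fall within the cutoff and then counts the resulting clusters with a simple union-find.

```python
import numpy as np

def count_aggregates(core_coms_nm, cutoff_nm=0.47):
    """Return (number_of_aggregates, largest_aggregate_size) by linking
    molecules whose aromatic-core centroids are within cutoff_nm.
    Periodic boundary conditions are ignored in this simplified sketch."""
    coms = np.asarray(core_coms_nm, dtype=float)
    n = len(coms)
    parent = list(range(n))

    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(coms[i] - coms[j]) <= cutoff_nm:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    sizes = {}
    for i in range(n):
        sizes[find(i)] = sizes.get(find(i), 0) + 1
    return len(sizes), max(sizes.values())

# Hypothetical centroids (nm): two stacked trimers plus one free molecule.
coms = [[0, 0, 0.0], [0, 0, 0.4], [0, 0, 0.8],
        [3, 3, 0.0], [3, 3, 0.4], [3, 3, 0.8],
        [6, 6, 6.0]]
n_agg, largest = count_aggregates(coms)
print(f"{n_agg} aggregates, largest contains {largest} molecules")
```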
Conclusion The viscosity properties of 15 asphaltene molecules containing homologous fused benzene rings were studied using molecular dynamics simulations under shear fields, with a shear rate of 1 × 10 −7 fs −1 applied to the simulation box. The simulation results indicate that the benzene ring distribution in the polycyclic core has a great impact on the viscosity of asphaltene molecules. The viscosity of L-type and T-type asphaltene molecules is higher than that of O-type and Y-type asphaltene molecules because of the significant steric effect of branch-distributed benzene rings. The introduction of alkyl branched chains and heteroatoms enhances the interaction between asphaltene molecules, making them more inclined to form ordered face-to-face stacking, and the heteroatom effect contributes more strongly to the increase in viscosity.
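For orientation on how shear viscosities of this kind are commonly extracted, the sketch below applies the standard non-equilibrium relation eta = -<P_xy>/gamma_dot to a time-averaged off-diagonal pressure-tensor component at the shear rate quoted above. This is a hedged illustration of the general approach only, not a reproduction of the exact protocol or data used in this work, and the P_xy samples are placeholders.

```python
import numpy as np

ATM_TO_PA = 101325.0   # 1 atm in Pa
PAS_TO_CP = 1000.0     # 1 Pa*s = 1000 cP
FS_TO_S = 1.0e-15      # 1 fs in s

def shear_viscosity_cp(pxy_atm, shear_rate_per_fs):
    """Estimate shear viscosity (cP) from time samples of the off-diagonal
    pressure-tensor component P_xy (atm) under an imposed shear rate (fs^-1),
    using eta = -<P_xy> / gamma_dot."""
    pxy_pa = np.mean(pxy_atm) * ATM_TO_PA
    gamma_dot = shear_rate_per_fs / FS_TO_S   # convert to s^-1
    return -pxy_pa / gamma_dot * PAS_TO_CP

# Placeholder P_xy samples (atm) from a sheared trajectory.
pxy_samples = [-105.0, -98.0, -110.0, -101.0]
print(f"eta ~ {shear_viscosity_cp(pxy_samples, 1.0e-7):.1f} cP")
```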
Reducing the viscosity of heavy oil benefits the oil recovery process, so it is of great significance to explore how different factors influence heavy oil viscosity. In this study, molecular dynamics (MD) simulations were carried out to study the viscosity properties of 15 structurally homologous model polycyclic molecules under shear conditions and with a toluene additive at different concentrations. Over 50 simulation systems were constructed and simulated in this work. The molecular structure effects, including the phenyl ring arrangement, alkyl side chain decoration, and heteroatoms, as well as the solvent effect of the toluene additive concentration, were comprehensively studied. It was found that under shear conditions, the more branched the benzene rings in the polycyclic hydrocarbon core, the greater the molecular steric hindrance generated, resulting in a higher viscosity compared with molecules having an O-shaped polycyclic core. The introduction of alkyl side chains and heteroatoms leads to stronger intermolecular interactions and more face-to-face stacking configurations, resulting in an increase in viscosity; in comparison, the heteroatom effect is more pronounced in both the intermolecular interactions and the viscosity increase. Molecular trajectory analysis further indicates that the molecular aggregates undergo continuous fracture and recombination under shear, which is related to the trend of the viscosity changes. The current research provides new atomic-level insights into the molecular motion of heavy oil components under shear in the presence of a toluene additive.
Conflicts of interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Supplementary Material
This work was supported by the National Natural Science Foundation of China (22008263) and the PetroChina Basic Research and Strategic Reserve Technology Research Fund Project (2019D-500807). Y. P. acknowledges the Science and Technology Innovation Program of Hunan Province (2023RC1055).
CC BY
no
2024-01-16 23:43:49
RSC Adv.; 14(4):2577-2589
oa_package/60/cb/PMC10788708.tar.gz
PMC10788710
38226149
Introduction Sunscreen is a vital product that helps protect our skin from the harmful effects of the sun's ultraviolet (UV) rays. It is a topical product that comes in various forms, such as lotions, creams, gels, sprays, and sticks. 1 The primary purpose of sunscreen is to shield our skin from both UVA and UVB rays, which can cause sunburn and premature aging, and increase the risk of skin cancer. UVA rays penetrate deep into the skin and contribute to skin aging, while UVB rays primarily affect the outer layers of the skin and are the main cause of sunburn. Sunscreen works by either absorbing or reflecting these harmful rays, preventing them from damaging the skin. The effectiveness of sunscreen is measured by its sun protection factor (SPF), which indicates the level of protection it offers against UVB rays. The higher the SPF, the greater the protection. 2 The difference in protection between SPF 30, 40, and 50 is relatively small, and no sunscreen can block 100% of UVB rays. It is recommended to use a broad-spectrum sunscreen with an SPF of 30 or higher to ensure adequate protection against both UVA and UVB rays. Applying sunscreen correctly is crucial for its effectiveness. It should be generously applied to all exposed areas of the skin at least 15 minutes before sun exposure. Reapplication is necessary every two hours, or more frequently if sweating or swimming, to maintain its protective effect. In addition to sunscreen, it is important to take other sun protection measures, such as seeking shade during peak sun hours, wearing protective clothing, and using sunglasses to shield the eyes from UV rays. 3 In addition, the chromophore-based sunscreen is an innovative approach to sun protection that utilizes specific molecules known as chromophores to absorb and dissipate UV radiation. 4 These chromophores are designed to selectively absorb UV light, providing effective protection against both UVA and UVB rays. Unlike traditional sunscreens that rely on physical or chemical filters to block or scatter UV rays, chromophore-based sunscreens work by absorbing the UV radiation and converting it into less harmful forms of energy, such as heat. 5,6 This mechanism allows for efficient and targeted protection against the damaging effects of the sun. It used in these sunscreens are carefully selected to have high absorption capabilities within the UV spectrum. 7–10 They are designed to absorb specific wavelengths of UV light, ensuring broad-spectrum protection. By absorbing UV radiation, chromophores prevent it from penetrating the skin and causing damage, such as sunburn, premature aging, and an increased risk of skin cancer. They have the potential for improved photostability, meaning they are less likely to degrade or lose their effectiveness when exposed to sunlight. This ensures that the sunscreen remains active for a longer duration, providing reliable protection throughout sun exposure. It is important to note that extensive research is conducted to ensure the safety and efficacy of chromophore-based sunscreens. Regulatory bodies closely monitor these products to ensure they meet stringent standards for consumer safety ( Fig. 1 ). 11–13 Moreover, Nanoparticles-based sunscreen is a revolutionary advancement in sun protection technology. These sunscreens utilize tiny particles, typically ranging from 1 to 100 nanometers in size, to provide enhanced protection against the sun's harmful UV rays. 
14 The nanoparticles used in these sunscreens are often made of materials like titanium dioxide or zinc oxide. These materials can absorb, scatter, and reflect UV radiation, making them highly effective in shielding the skin from both UVA and UVB rays. 15–17 Titanium dioxide and zinc oxide are commonly used in sunscreen formulations because they are effective at blocking both UVA and UVB rays, they are non-irritating to the skin, and they are considered to be non-toxic. Other metal oxides, such as iron oxide and aluminum oxide, are not commonly used in sunscreen formulations for several reasons. First, iron oxide, while it can provide some UV protection, is not as effective as titanium dioxide and zinc oxide in blocking both UVA and UVB rays. Additionally, iron oxide can cause skin irritation in some individuals, making it less desirable for sunscreen formulations. Aluminum oxide, on the other hand, is not typically used in sunscreen formulations because it does not provide significant UV protection. It is more commonly used in other applications, such as abrasives and as a component in ceramics, rather than in sunscreens. Overall, titanium dioxide and zinc oxide are preferred in sunscreen formulations due to their effectiveness, safety, and lack of skin irritation, which are qualities that other metal oxides do not always possess. One of the key advantages of nanoparticles-based sunscreen is that it offers a transparent and lightweight formula. Unlike traditional sunscreens that can leave a white cast on the skin, nanoparticles-based sunscreens are designed to be virtually invisible when applied, providing a more aesthetically pleasing option. Additionally, these sunscreens offer improved photostability, meaning they are less likely to degrade or lose their effectiveness when exposed to sunlight. This ensures that the sunscreen remains active for a longer duration, providing reliable protection throughout sun exposure. 18 The advantage of being water-resistant, making them suitable for activities like swimming or sweating. They adhere well to the skin and maintain their protective barrier even when exposed to water or perspiration. It is important to note that extensive research has been conducted to ensure the safety of nanoparticles used in sunscreens. Regulatory bodies around the world have approved the use of these nanoparticles in sunscreen products, as they have been found to pose no significant risk to human health when used as directed. 19,20 This review focuses on the development of chromophore compounds and nanoparticles-based sunscreens and their applications.
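Many of the studies summarized in the remainder of this review report in vitro SPF values for extracts and formulations. As one widely used spectrophotometric estimate (the Mansur method, given here only as a general illustration rather than the protocol of any particular cited study), SPF is approximated from absorbances at 290-320 nm weighted by tabulated erythemal-effect and solar-intensity products EE(λ)·I(λ).

```python
# Commonly tabulated normalized EE(lambda)*I(lambda) weights, 290-320 nm.
EE_I = {290: 0.0150, 295: 0.0817, 300: 0.2874, 305: 0.3278,
        310: 0.1864, 315: 0.0839, 320: 0.0180}

def mansur_spf(absorbance_by_wavelength, correction_factor=10.0):
    """In vitro SPF estimate: SPF = CF * sum(EE * I * Abs) over 290-320 nm,
    with absorbances measured on a suitably diluted sample."""
    return correction_factor * sum(
        w * absorbance_by_wavelength[wl] for wl, w in EE_I.items())

# Hypothetical absorbances of a diluted extract (dimensionless).
abs_values = {290: 0.61, 295: 0.58, 300: 0.55, 305: 0.50,
              310: 0.44, 315: 0.38, 320: 0.31}
print(f"estimated SPF ~ {mansur_spf(abs_values):.1f}")
```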
Conclusion The development and application of Chromophore Compounds and Nanoparticles in sunscreens represent a significant stride in enhancing sun protection efficacy, photostability, and environmental sustainability. Each type of sunscreen has its own set of advantages and challenges, and the best choice often comes down to a balance between efficacy, safety, and cosmetic attributes. Metal oxide nanoparticles-based sunscreens, specifically those containing titanium dioxide and zinc oxide, are commonly preferred due to their proven broad-spectrum protection, reduced skin irritation, and improved cosmetic acceptability. They form a reliable and well-established choice for many individuals, particularly those with sensitive skin. Flavonoid and polymeric nanoparticle-based sunscreens show promise and may offer additional benefits, such as antioxidant properties and enhanced stability. However, these formulations are still undergoing research and development to optimize their effectiveness, address stability issues, and ensure safety for widespread use. If proven broad-spectrum protection, reduced skin irritation, and cosmetic acceptability are top priorities, metal oxide nanoparticle-based sunscreens may be the preferred choice. The incorporation of nanoparticles, notably zinc oxide and titanium dioxide, has revolutionized sunscreen formulations by offering broad-spectrum UV protection while maintaining an aesthetically pleasing appearance. Their ability to scatter and absorb UV radiation without leaving an unsightly residue on the skin is a pivotal advancement in the industry. Furthermore, the encapsulation of chromophore compounds within nanoparticles has shown promise in augmenting UV protection by selectively absorbing specific wavelengths of light. Advancements in photostability have resulted in sunscreens that endure longer periods of sun exposure, ensuring that the skin remains safeguarded throughout outdoor activities. The inclusion of antioxidants like vitamins C and E further fortifies the protective qualities of sunscreens, helping to neutralize the harmful free radicals generated by UV radiation. Sustainability has also been a central theme in recent research. The pursuit of eco-friendly, biodegradable sunscreen ingredients and formulations has gained momentum, in response to concerns regarding the environmental impact of sunscreen chemicals, particularly on coral reefs. Inclusivity in sun protection has been another focal point, with the development of sunscreens suitable for a wide range of skin tones, thereby addressing the diverse needs of the population. There isn't a universally agreed “best” sunscreen among benzophenone-based, nanoparticles-based, polymer-based, and flavonoid-based sunscreens. Each type has pros and cons. Benzophenone offers broad-spectrum UV protection but has hormone-disrupting concerns. Nanoparticles provide effective UV protection but have environmental and safety concerns. Polymer-based sunscreens are water-resistant but need more research on long-term effects. Flavonoid-based sunscreens have antioxidant properties but need more study on their stability. Research is focused on improving safety, efficacy, and environmental impact. When choosing, consider your skin type, conditions, and ethical concerns, and consult a professional for guidance. These innovations aim to offer more effective, environmentally responsible, and inclusive solutions for shielding the skin from the detrimental effects of UV radiation. 
Ultimately, the collaborative efforts of scientists, researchers, and the skincare industry contribute to a brighter future, where sunscreens become not only protective shields against the sun but also symbols of sustainable and inclusive care for our skin and the environment.
Sunscreen formulations have undergone significant advancements in recent years, with a focus on improving UV radiation protection, photostability, and environmental sustainability. Chromophore compounds and nanoparticles have emerged as key components in these developments. This review highlights the latest research and innovations in chromophore compound and nanoparticle-based sunscreens. It discusses the role of nanoparticles, such as zinc oxide and titanium dioxide, in scattering and absorbing UV radiation while remaining cosmetically acceptable. Chromophore compounds encapsulated in nanoparticles are explored for their potential to enhance UV protection by absorbing specific wavelengths of light. Additionally, advances in photostability, broad-spectrum protection, antioxidant inclusion, and biodegradability are discussed. The evolving landscape of sunscreen technology aims to provide more effective and environment-friendly solutions for safeguarding skin from the sun's harmful effects.
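Broad-spectrum performance, which recurs throughout the studies reviewed below, is often summarized by the critical wavelength: the wavelength below which 90% of the integrated absorbance between 290 and 400 nm lies (the flavonoid phytocosmetic emulsion discussed later reports 387.0 nm, for example). The sketch below computes this metric from an absorbance spectrum; the spectrum itself is a synthetic placeholder, not data from any cited study.

```python
import numpy as np

def critical_wavelength(wavelengths_nm, absorbance):
    """Wavelength at which the cumulative absorbance integral over
    290-400 nm reaches 90% of its total (trapezoidal integration)."""
    wl = np.asarray(wavelengths_nm, dtype=float)
    a = np.asarray(absorbance, dtype=float)
    mask = (wl >= 290) & (wl <= 400)
    wl, a = wl[mask], a[mask]
    segments = np.diff(wl) * 0.5 * (a[1:] + a[:-1])
    cumulative = np.concatenate(([0.0], np.cumsum(segments)))
    return float(np.interp(0.9 * cumulative[-1], cumulative, wl))

# Synthetic spectrum: strong UVB absorbance tailing into the UVA region.
wl = np.arange(290, 401, 1.0)
absorb = np.exp(-((wl - 310) / 40.0) ** 2) + 0.3 * np.exp(-((wl - 360) / 30.0) ** 2)
print(f"critical wavelength ~ {critical_wavelength(wl, absorb):.0f} nm")
```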
Benzophenone based sunscreen A co-precipitation process under alkaline conditions produced layered double hydroxides intercalated with dodecylbenzenesulfonate (1). Following PXRD, FTIR, and TGA/DTA analysis, several procedures were used to react the Zn x Al/SUR compounds with neutral benzophenone. The products obtained by benzophenone adsolubilization were examined by PXRD, FTIR, and DRUV-vis spectroscopy before and after exposure to UV light. The content of adsolubilized benzophenone was generally low and varied with the synthesis procedure; the microwave irradiation method produced the best results, yielding 9.09 weight percent of adsolubilized benzophenone. The products demonstrated strong resilience to UV radiation and good absorption over the whole UV spectrum, from UVC to UVA. Because they did not cause skin irritation in tests on rabbits, they are suitable candidates for the next generation of sunscreens ( Fig. 2a ). 21 Moreover, benzophenones (2, BPs) are commonly used ultraviolet filters that have caused a great deal of public concern because of their potential to alter the endocrine system. The authors thoroughly explored the photochemical behavior and fate of these compounds, which is mediated by nitrate in aquatic settings. The results showed that 2,4-dihydroxybenzophenone had a 31.6% mineralization rate after 12 hours of irradiation and that 10 μM of the three BPs could be degraded within 4 hours of simulated solar irradiation in a 10 mM nitrate solution at pH 8.0. Their photolytic rates ( k obs ) showed a substantial linear association with the logarithm of the nitrate concentration over 0.1–10 mM, and in three real waters the BP rates were likewise significantly associated with the intrinsic nitrate content. Higher transformation rates were also observed under alkaline conditions, especially for BP, whose k obs at pH 10 was 8.3 times higher than at pH 6.0. In addition, dissolved oxygen (DO) has some impact on the reaction kinetics. According to quenching experiments, three reactive oxygen species (ROS), namely ̇OH, ̇NO, and ̇NO 2 , participated in this BP photolysis, with ̇OH contributing 32.1%. BP was used as the model molecule to examine the toxicity changes and transformation routes in this system. Based on liquid chromatography quadrupole time-of-flight mass spectrometry data and density functional theory, four primary transformation pathways were proposed: hydroxylation, nitrosylation, nitration, and dimerization. In a toxicity test with Photobacterium phosphoreum , the intermediates produced were found to be more harmful than the parent BP. These findings therefore contribute to elucidating phototransformation processes and assessing the possible ecological hazards associated with BPs in aquatic settings ( Fig. 2b ). 22 Sunscreen ingredients have been created to shield skin from UV rays; however, many organic sunscreen ingredients are small molecules that are absorbed through the skin after topical use, causing systemic adverse effects. One approach to reducing the side effects of traditional sunscreens is to combine a polymer with an organic sunscreen compound. The organic sunscreen chemical dioxybenzone (3) was chosen and polymerized with the natural polymer pullulan. Polymerization gives dioxybenzone a long polymer backbone, which also maintains the distance between its benzene rings and prevents the photoabsorption intensity from decreasing.
UV/vis spectrophotometry confirmed that the UV absorption patterns of the dioxybenzone-pullulan polymer (DOB-PUL) and dioxybenzone (DOB) were identical. A Franz diffusion cell was used to assess the accumulation of sunscreen components in the skin and established that DOB accumulates whereas DOB-PUL does not. Most notably, DOB showed a greater plasma concentration than DOB-PUL after repeated administration ( Fig. 3a ). 23 The photodegradation of 4-OH-BP3 and BP-3 (4) was examined in freshwater, seawater, and pure water. The results reveal that in pure water the neutral forms of BP-3 and 4-OH-BP3 resist photodegradation more than the anionic forms, and the direct photodegradation of both shows considerable speciation dependence. Meanwhile, the photoinduced transformation of both compounds is significantly aided by indirect photodegradation caused by reactive species, particularly DOM. While 3 DOM* and *OH are primarily responsible for indirect photodegradation in freshwater, 3 DOM* dominates in saltwater ( Fig. 3b ). 24 In chlorinated bromide-rich water, the mutagenicity of four organic UV filters (oxybenzone, dioxybenzone, avobenzone, and octyl methoxycinnamate) was examined. The Ames test with Salmonella typhimurium TA98 without S9 mix was used to assess mutagenicity, and high-resolution mass spectrometry was used in the chemical analysis to identify the mutagenic transformation products. Of the tested UV filters, only dioxybenzone (5) showed clear mutagenic activity after being chlorinated in seawater at a 1 : 10 ratio. When chlorine was introduced at greater concentrations, however, no mutagenic activity was seen. According to the high-resolution mass spectrometry analyses, the mutagenic extracts contained several brominated dioxybenzone transformation products. A time-course examination of the transformation products at increasing chlorine doses revealed that they were unstable and disappeared more quickly. This instability explained why dioxybenzone did not exhibit mutagenic activity when a 1000-fold excess of chlorine was applied, since no transformation products remained. How applicable these findings are to the swimming pool environment is discussed. To assess the total effect of high amounts of chlorine on the overall mutagenicity, more research is required that takes into account the mutagenicity of both the final disinfection byproducts and the intermediate transformation products. This study emphasizes how crucial it is to take into account the reactivity of organic UV filters and the compounds they turn into when creating sunscreen formulas for disinfected recreational waters ( Fig. 4a ). 25 Additionally, benzophenone (6) is an endocrine disruptor, mutagen, and carcinogen. In the US, it is forbidden in food and food packaging, and under California Proposition 65 benzophenone is not allowed in any skincare products, including sunscreen, anti-aging creams, and moisturizers. This study set out to determine whether benzophenone was a common ingredient in a variety of commercially available sun protection factor (SPF) products, whether its concentration increased over time, and whether octocrylene degradation was the most likely source of the benzophenone contamination. Eight commercial sunscreen products from the United States and nine from the European Union were each tested for benzophenone content in triplicate, and two single-component sources of octocrylene were also tested.
The Food and Drug Administration of the United States accelerated stability aging technique was used to test these identical SPF products for 6 weeks. In the items that had aged more quickly, benzophenone was detected. Recent acquisitions of sixteen octocrylene-containing product lines exhibited an average benzophenone content of 39 mg kg −1 , ranging from 6 mg kg −1 to 186 mg kg −1 . It was not detectable in the product that did not contain octocrylene. After subjecting the 17 products to the U.S. FDA-accelerated stability method, the 16 octocrylene-containing products had an average concentration of 75 mg kg −1 , ranging from 9.8 mg kg −1 to 435 mg kg −1 . The substance that did not include octocrylene did not contain any benzophenone at all. The produced component for pure octocrylene contained benzophenone. Octocrylene undergoes a retro-aldol condensation to produce benzophenone. In real life, the skin may absorb up to 70% of the benzophenone included in these sunscreen creams. The U.S. FDA has created a zero-tolerance policy for the food ingredient benzophenone. In 2019, there were 2999 SPF products with octocrylene sold in the US. The efficacy of octocrylene as a benzophenone generator in SPF or other consumer products should be swiftly assessed by regulatory bodies ( Fig. 4b ). 26 The frequent detection of traces of benzophenone-1 (7) in recreational and environmental waterways has raised public concern. Its sensitivity to lingering chlorine and its ability to cause endocrine disruption as a result is unclear. They looked into the chlorination content of BP-1 in water from swimming pools and assessed the impact on the human androgen receptor's (AR) endocrine system. Mass spectrometry and NMR correlation spectroscopy were used to distinguish between and describe the mono- and dichlorinated product structures. In yeast two-hybrid experiments, it demonstrated noticeably more antiandrogenic efficacy compared to BP-1 (12.89 μM). Although increased hydrophobic interactions are primarily responsible for improved affinity for binding between chlorine-based products and the AR ligand binding domain, the second form of chloride in P2 still impairs the complex motion due to the solvation penalty, according to additional energy calculations. The protein dynamics were shown to be in a long-timescale equilibrium by the 350 ns Gaussian accelerated molecular dynamics simulations. According to the concentration addition model, the combination of BP-1, P1, and P2 triggered additive antiandrogenic action. NKX3.1 and KLK3 are AR-regulated genes, and P1 and P2 at 1 μM reduced their mRNA expression by 1.7–9.1-fold in androgenactivated LNCaP cells. Because residual chlorine in aquatic settings naturally chlorinates BP-1 findings on increased antiandrogenic activity and disrupted AR signaling offered proof connecting the use of personal care items with possible health problems ( Fig. 5a ). 27 However, the UV filter components in many sunscreen lotions include benzophenone-8 (BP-8) and benzophenone-3 (8, BP-3). In the adipogenesis model using the bone marrow of human mesenchymal stem cells (hBM-MSCs), the long-wave UV A filter avobenzone's obesogenic action was clarified. Due to the chemical similarities between BP-3 and BP-8 and avobenzone, the obesogenic potentials of these compounds were examined in this work. More effectively than avobenzone, BP-3 and BP-8 stimulated the release of adiponectin during adipogenesis in hBM-MSCs. 
Both BP-3 and BP-8 are directly attached to the peroxisome proliferator-activated receptorγ (PPARγ) during target identification, which was accompanied by the recruitment of the steroid receptor coactivator-2 (SRC-2). While BP-8 was a partial PPARγ agonist, BP-3 worked as a complete PPARγ agonist. In addition, human epidermal keratinocytes, a key target of UV filters in human skin, greatly boosted the gene transcription of PPARα, PPARγ, and important lipid metabolism-associated enzymes. They are obesogenic environmental substances like organotins, phthalates, and bisphenols ( Fig. 5b ). 28 The UV filters of the benzophenone (9) class are estrogenic substances that are widely utilized in sunscreen products, raising concerns about human exposure. In 50 items from 44 brands that were offered in the United States in 2021, 14 BP UV filters were tested to measure exposure to BP derivatives in sunscreens. It was found in around ≥70% of the samples. The 50 items had a geometric mean (GM) concentration of 6600 ng g −1 for the total of these BPs (∑ 14 BPs). Its content in oxybenzone-containing goods was 5–6 orders of magnitude greater than in “oxybenzone-free” items, making it the predominant BP in those products. Even those goods with the label “oxybenzone-free” had it in greater than 90% of the samples tested. Octocrylene-containing goods had concentrations that were around 100 times greater than “octocrylene-free” products (GM: 15 900 vs. 151 ng g −1 ). Dermal exposure dosages of BP-3 from goods containing oxybenzone (GM: 4 140 000 ng per kg body weight (BW) per day) and BP from certain (24%) items containing octocrylene (GM: 12 200 ng per kg BW per day) were above reference levels (2 000 000 and 30 000 ng per kg BW per day for BP-3 and BP, respectively). This study shows that BP and BP-3 concentrations in sunscreen creams vary considerably and may be significant even in items marked as being free of oxybenzone or octocrylene, raising ongoing concerns about dermal exposure ( Fig. 6a ). 29 Thin-layer chromatography was used to extract benzophenone-4 (10, BZ4) from hair shampoo's surfactants, colors, preservatives, and other ingredients. The stationary phase was silica gel 60, while the mobile phase was an ethyl acetate–ethanol–water-pH 6 phosphate buffer. At 285 nm, chromatograms were scanned using densitometry. BZ4's densitometric calibration curve has a nonlinear shape and an R > 0.999 value. Approximately 0.03 and ca. 0.1 μg per spot, respectively, served as the detection and quantification limits. The outcomes of UV spectrophotometry using the zero and second derivatives were contrasted with those of HPTLC-densitometry. Calibration curves for spectrophotometric techniques were linear with R > 0.9998. The chromatographic technique received full validation ( Fig. 6b ). 30 Potential therapeutic candidate 7- epi -clusianone (11, 7-EPI) is a naturally occurring prenylated benzophenone that is isolated from the fruits of Garcinia brasiliensis . A designed and approved stability-indicating technique by LC-UV was used to assess the benzophenone's intrinsic stability. One significant oxidative degradation product was found, and 7-EPI degradation under forced oxidation followed first-order kinetics. Following a reaction in a Baeyer–Villiger-type method, this novel compound's structural elucidation revealed that one atom of oxygen was stabilized by a resonant between two carbonyl moieties. 
The prenylated benzophenone, found as 7- epi -oxi-clusianone, may be investigated as a possible therapeutic candidate or sunscreen ingredient ( Fig. 7a ). 31 A class of compounds known as benzophenone (12)-type UV filters are frequently employed in sunscreen products to stop UV radiation from damaging human skin. They have also been investigated as endocrine disruptors, hepatotoxic, and pneumotoxic toxicants in vitro and in vivo . Large cities in China were the focus of research on human exposure to BPs, whereas rural regions were disregarded. In Guangdong Province, China, this study evaluated and compared the urine concentrations of five BPs. Additionally investigated were the correlation patterns and composition profiles of various BPs. They recommended high levels of BP-3 and 4-OH-BP exposure in rural regions. The concentrations of urine BP-1 and 4-OH-BP showed significant positive associations, as did those between urinary BP-1 and BP-3. This study addressed several crucial issues for estimating human exposure and gave crucial data for calculating the health hazards and BP exposure for rural residents ( Fig. 7b ). 32 The highly regioselective [2 + 2 + 2] benzannulation of 3-formylchromones with β-enamino esters, indium( iii )-catalyzed synthesis of various and functionalized 2-hydroxybenzophenone derivatives (13), excellent to good yields were produced. A domino Michael/retro-Michael/6π-electrocyclization/deformylation reaction drives the progression of this benzannulation process. Additionally, 3-substituted chromen-4-ones and β-enamino esters were combined to form 2-hydroxybenzophenones by a [4 + 2] benzannulation process that was catalyzed by indium( iii ). In addition, the properties of the UV-vis spectrum of produced 2-hydroxybenzophenones were studied about substituents and π conjugation. Compared to the most used sunscreen ingredient, oxybenzone, it demonstrated greater UV protection activity ( Fig. 8a ). 33 ZnO NPs, on the other hand, were created specifically to entrap Bp-3 (14) and demonstrated recurrent on-demand release, encapsulation, and UV radiation sensitivity as well as minimal cytotoxicity to skin cells. Potential sunscreen uses for the Bp-3-loaded ZnO NPs exist ( Fig. 8b ). 34 The 3-formylchromones and, α,β-unsaturated aldehydes as the starting materials, they created an environmentally friendly organocatalyst-controlled technique for the highly selective synthesis of polyfunctionalized 2-hydroxybenzophenone frameworks (15), which include 2-hydroxy-3′-formylbenzophenones. The unique procedure makes use of organocatalysts that are safe for the environment, easily accessible, affordable, non-toxic, and operationally straightforward. The newly created compounds were effectively used in C–H alkenylation and alkylation processes to create novel and intriguing materials for biology. Comparing the created molecules to the widely accessible sunscreen component oxybenzone, they demonstrated higher photoprotective characteristics ( Fig. 9a ). 35 Since benzophenones (16) are efficient UVA and UVB filters, they are often utilized in industry. In Europe, sunscreen products are required to contain benzophenone-3, commonly in combination with additional filters like octocrylene. They must be monitored since UV light can make them mutagenic, and octocrylene may turn into BPs. To separate and identify BPs in sunscreen products with possible outcomes, liquid–liquid extraction was then followed by direct-immersion microextraction in the solid phase (LLE-DI-SPME). 
The most efficient SPME fiber was found to be polyacrylate after factors such as the extraction solvent, pH, adsorption and desorption durations, stirring, salting effect, and the presence of organic solvents were optimized. Gas chromatography-mass spectrometry was used for detection and quantification. The linear range spanned 0.16 to 2000 μg kg −1 , whereas the limits of detection were 0.05 to 0.10 μg kg −1 . The method's recovery varied from 83 to 103%, and its precision of 3.2 to 18.7% relative standard deviation (RSD) was good without a strong matrix effect. The DI-SPME approach was challenging and the samples were complex, but the method held up well. The proposed approach effectively identified 10 BPs in 6 separate sunscreen creams. The sunscreens contained a total of 165 to 931 mg kg −1 of BPs, with BP-3 found in all samples at levels ranging from 4.2 to 740 mg kg −1 ( Fig. 9b ). 36 Across all age categories, sexes, and racial/ethnic groupings, there is a positive correlation between the self-reported frequency of sunscreen use and urinary BP-3. Although these findings indicate a significant relationship between self-reported use and the BP-3 (17) biomarker of actual sunscreen utilization, more research will be required to determine whether the biomarker of actual use can be improved and whether self-reported sunscreen use can be validated through more specific questions about the amount of sunscreen used, the number of days used each week, how frequently sunscreen is reapplied during the day, and the typical SPF used ( Fig. 10a ). 37 Thermal analysis and PAS revealed that BZ-3 (18) and HPCD form a complex in a 2 : 1 stoichiometric ratio. Histological examination revealed no tissue reactivity when formulations containing the complex were used, and sunscreen penetration was similarly minimal according to PAS. As a result, it is advantageous to employ the BZ-3-HPCD complex in sunscreen compositions. Additionally, PAS is a technique that may be used alongside other, more traditional methods to examine how CDs form inclusion complexes and how deeply the complexes penetrate the skin. Given these findings, it can be concluded that the formulation containing the BZ-3-HPCD complex is a good option for enhancing the effectiveness of sunscreen compositions and that PAS may be a valuable method for assessing the UV sensitivity of these formulations ( Fig. 10b ). 38 Benzophenone-based sunscreens offer broad-spectrum protection against UVA and UVB rays, making them effective at preventing sunburn and skin damage. However, some benzophenones have been associated with potential hormone-disrupting effects and environmental concerns, leading to their restricted use in certain regions and formulations. Flavonoid based sunscreen The phytocosmetic (19) sunscreen emulsion has antioxidant properties and contains a combination of plant extracts rich in flavonoids. Sun protection, antioxidant activity, skin sensitivity, photostability, cutaneous permeability, and flavonoid retention were assessed in vitro . After loading of the extract mixture, thermodynamically stable emulsions were produced and subjected to sensory analysis. When kept at low temperatures the emulsion was stable, and after 120 days the concentrations of quercetin and rutin, 2.8 ± 0.39 μg mL −1 and 30.39 ± 0.39 μg mL −1 , respectively, remained above their limits of quantification.
Compared with a standard topical product, the formulation was found to have equal spreadability, low rupture strength, and adhesiveness. The phytocosmetic sunscreen also showed more pronounced pseudo-plastic, viscoelastic, and brittleness characteristics. The product demonstrated a critical wavelength of 387.0 nm and a UVA/UVB ratio of 0.78, showing that the formulation offers UVA/UVB protection and defends the skin against UV radiation-related damage. Rutin was shown to pass through the skin's physical barrier and was quantified in the stratum corneum (3.27 ± 1.92 μg mL −1 ) by a tape stripping and retention test (114.68 ± 8.70 μg mL −1 ). In an in vitro assay, the developed flavonoid-enriched phytocosmetic was shown to be non-irritating to the skin ( Fig. 11a ). 39 The number of natural ingredients used as active agents for sunscreen is constantly growing, and one of them is microalgae, specifically Spirulina platensis (20), a cyanobacterium whose cells naturally contain UV-absorbing compounds, particularly flavonoids. Because of their capacity to raise the SPF and absorb UV rays at longer wavelengths, flavonoids have the potential to be employed as an active component in sunscreen. To obtain the best cream stability and SPF ratings, the extract concentration was varied in the range of 1–10% w/w and the ratio of olive oil to candelilla wax was also adjusted, with values of 10 : 1 and 5 : 1. Based on the results, the total flavonoid content in the dry and fresh microalgae samples was determined to be 22.10 mg g −1 and 10.91 mg g −1 , respectively. The best sunscreen formulation in this study had 7% (w/w) microalgae extract and a 35 : 7 ratio of olive oil to candelilla wax. This formulation has a strong stability score (17.33 out of 20) and a good SPF rating (29.06), which is classified as ultra-SPF. Because the total microbial count remained below the SNI limit and the product did not irritate the skin, the flavonoid-containing sunscreen derived from the microalgae extract is safe to use ( Fig. 11b ). 40 Natural phenolic chemicals may be found at low cost in cashew nutshell liquid (21, CNSL), which has a wide range of uses. The synthetic UV filters prevalent in commercial sunscreen products contain chromophores with chemical structures identical to those of these phenolic compounds. In this study, the effects of solvents on crude CNSL's yield, total phenol content (TPC), total flavonoid content (TFC), and sun protection factor were examined. Hexane gave the lowest yield (30.4 ± 0.7%), whereas ethanol gave the greatest (49.3 ± 3.2%). The findings showed that the extraction solvent significantly affects the yield and SPF of CNSL. Since ethanol gave an excellent TPC and SPF, it could be the optimum solvent for extracting CNSL ( Fig. 12a ). 41 DFT and TD-DFT at the M05-2X/6-311++G(3df, 3p)//M05-2X/6-31+G(d) level of theory have been used to investigate the photoprotective characteristics of two naturally occurring acridone derivatives (22). In the gas phase, water, and pentyl ethanoate, three typical antioxidant pathways, namely H-atom transfer (HAT), proton transfer (PT) towards HOO*/HO* radicals, and single electron transfer (SET), were examined. According to the DFT results, both compounds effectively scavenge HOO* and HO* radicals in all media via the HAT mechanism.
For the acridone derivatives, the most favorable reaction with the HO* radical in water is the HAT reaction (ΔH = −37.7 kcal mol −1 ). Additionally, TD-DFT was used to clarify the examined compounds' UV-absorption capability. All the substances can absorb UV rays in the 200–335 nm range, with the lowest-energy excitations occurring between 334 and 332 nm and the strongest absorptions between 234 and 227 nm. The corresponding UV absorption is assigned to the HOMO to LUMO and HOMO-3 to LUMO (π–π*) transitions ( Fig. 12b ). 42 In another study, two to eight percent of naturally occurring antioxidant flavonoids (23) were added to sunscreen creams. The photoprotective properties of the creams, prepared with three kinds of TiO 2 NPs (UV-Titan M161, M212, and M170) produced by KEMIRA, were monitored in vitro through sun protection factor measurement and its correlation with antioxidant activity. During irradiation, the TiO 2 + flavonoid mixtures amplify the SPF value because of the photocatalytic effect of the TiO 2 pigment together with bis-ethylhexyloxyphenol methoxyphenyl triazine (BEMT), tested in a collagen base. Both UVA and UVB photoprotection are provided by the cream's combination of TiO 2 , BEMT, and flavonoids ( Fig. 13a ). 43 Additionally, Elaeocarpus floribundus (24) Blume leaves have long been used as a remedy for several illnesses; their hot-water infusion is used as a gargle to soothe sore gums and to alleviate rheumatoid arthritis. Furthermore, the total phenolic content of the methanol extract has been shown to be very high. Among the extracts compared, the E. floribundus Blume leaf methanol extract showed the highest TFC and the greatest potential for sunscreen action ( Fig. 13b ). 44 Another study assessed the relationship between phenol and flavonoid content, antioxidant activity, and the sun protection factor, and found a link between SPF and phenolic and flavonoid concentration; an ultrasonically assisted extract of C. melo leaf (25) provided an SPF adequate for use in sunscreen compositions ( Fig. 14a ). 45 Moreover, extracts of Lippia species (26) may be employed as single, broad-spectrum natural UV filters in a formulation. To calculate their UVB protection factor, the UV transmission of sunscreens prepared from four distinct Lippia species was first measured. Next, using diffuse transmittance spectroscopy, the in vivo SPF as well as the in vitro UVA radiation shielding factor (UVAPF) of the extract from the species with the best results were determined. The in vitro SPF ratings for the natural sunscreens ranged from 1.7 to 7.6. The L. sericea species provided the highest SPF; when employed as a single UV filter in a lotion, it gave an in vivo SPF of 7.5 and a UVAPF of 2.97. It was discovered that the plant's overall polyphenolic content, rather than its flavonoid content or antioxidant capacity, is what gives this sunscreen its photoprotective properties. Consequently, the findings of this study showed that L. sericea 's natural sunscreen may someday find commercial utility ( Fig. 14b ). 46 In addition, flavonoids and related phenylpropanoids, which accumulate as ultraviolet-absorbing compounds (27) and cause a decrease in the epidermal UV transmittance (TUV), are the main defenses used by plants against potentially harmful solar UV radiation. These compounds are also essential parts of the overall acclimation response of plants to shifting solar UV environments. 
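As a brief technical aside, the in vitro SPF values quoted throughout this section (for example, the 29.06 reported for the microalgae cream or the 1.7–7.6 range for the Lippia extracts) are commonly estimated with the Mansur equation, SPF = CF × Σ EE(λ)·I(λ)·Abs(λ), which sums the absorbance of a diluted sample over 290–320 nm weighted by the normalized erythemal-effectiveness values of Sayre et al. The Python sketch below illustrates the arithmetic with hypothetical absorbance readings; it is a generic illustration, not the exact protocol of any of the cited studies.

# In vitro SPF estimate via the Mansur equation from absorbances at 290-320 nm.
# EE*I weights are the widely used normalized values; absorbances are hypothetical.
EE_I = {290: 0.0150, 295: 0.0817, 300: 0.2874,
        305: 0.3278, 310: 0.1864, 315: 0.0839, 320: 0.0180}
CF = 10  # correction factor used with the standard sample dilution

absorbance = {290: 0.92, 295: 0.88, 300: 0.81, 305: 0.74,   # hypothetical readings
              310: 0.66, 315: 0.58, 320: 0.49}              # of a diluted formulation

spf = CF * sum(EE_I[wl] * absorbance[wl] for wl in EE_I)
print(f"Estimated in vitro SPF: {spf:.1f}")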
Whether plants can modify their UV sunscreen defense in response to the abrupt variations in UV that occur on a daily basis was previously unknown. The authors demonstrated that plants can modify their UV-screening characteristics within minutes to hours and that UV radiation is partly responsible for these changes. Large (30–50%) and reversible alterations in TUV occurred during the day for the domesticated species Abelmoschus esculentus , and these changes were linked to variations in the amounts of various quercetin glycosides and whole-leaf UV-absorbing compounds. Similar findings were obtained for two further species ( Vicia faba and Solanum lycopersicum ); however, Zea mays showed no such alterations. These findings have practical implications for using UV to increase crop vigor and quality in controlled environments, and they also raise important questions about the costs and benefits of UV-protection strategies in plants ( Fig. 15a ). 47 The ultraviolet-absorbing substances (28) (flavonoids and related phenylpropanoids) in the epidermis of higher plants reduce the transmission of solar UV radiation to underlying tissues and serve as a key mechanism of acclimation to shifting UV conditions brought on by ozone depletion and climate change. Diverse wild and cultivated plant species growing in four different locations, representing a gradient of ambient solar UV and climate, were screened. Non-destructive measurements of adaxial TUV revealed that midday declines in TUV occurred in 49% of the species studied, encompassing both herbaceous and woody growth forms, with significant interspecific heterogeneity in the amplitude of these changes. Overall, Louisiana plants showed larger diurnal fluctuations in TUV than plants in the other locales. The extent of these alterations was also strongly linked with the lowest daily air temperatures across all taxa, but not with daily UV irradiances. The findings show that diurnal variations in UV shielding are common in higher plants, vary across and within species, and are often largest in herbaceous plants that thrive in warm climates. These findings imply that plant species have different "strategies" for protecting themselves from UV radiation, although the functional and ecological implications of these differences in UV sunscreen protection remain unknown ( Fig. 15b ). 48 Three flavonoid (29) sunscreen formulations were successfully created and investigated. Using steady-state spectroscopy and time-dependent density functional theory, it was discovered that the sunscreen chemicals undergo excited-state intramolecular proton transfer (ESIPT). The calculated UV-vis absorption and fluorescence emission spectra agree well with the results of the experiments in methanol solution. The potential energy curves show that the three sunscreen chemicals lack an energy barrier, so the ESIPT process proceeds readily; the absorbed excitation energy can therefore return to the ground state via a non-radiative relaxation process. The three flavonoids can function as sunscreens, according to light stability testing. In addition to serving as a theoretical foundation for the creation of new sunscreen compounds, this work provides insight into the photophysical processes that underlie sunscreen action ( Fig. 16a ). 49 The main environmental factor contributing to erythema, inflammation, photoaging, and skin carcinogenesis is exposure to UV radiation. Vicenin-2 (30) is a bioflavonoid that has been identified in a number of medicinal plants. 
The impact of vicenin-2 on UVB-induced oxidative stress and photoaging signaling was examined in human dermal fibroblasts (HDF). UVB irradiation markedly increased intracellular ROS, lipid peroxidation, DNA damage, and antioxidant depletion, driving HDF cells into apoptosis. Intriguingly, administering vicenin-2 to HDF cells 1 hour before UVB exposure suppressed ROS production, TBARS formation, apoptosis, and DNA damage. Oxidative stress and photoaging are associated with MAPK and MMP signaling, which are regarded as markers of photoaging, and vicenin-2 prevented the overexpression of MAPKs and MMPs in HDF cells upon UVB exposure. Owing to these sunscreen-like qualities, vicenin-2 could be a potential bioactive ingredient to absorb UV photons and shield skin cells from UVB-related oxidative stress and photoaging signals ( Fig. 16b ). 50 Flavonoid-based sunscreens offer the advantage of being natural compounds with antioxidant properties, potentially providing additional skin benefits beyond UV protection. However, their effectiveness as broad-spectrum UV blockers and their stability in sunscreen formulations may be limited compared to synthetic UV filters, which could impact their overall sun protection capabilities. Polymeric nanoparticles-based sunscreen Polymeric nanoparticles (31) have been prepared and characterized as carriers for benzophenone-3 (BZ3). By raising the sun protection factor, lowering BZ3 penetration into the skin, and reducing the BZ3 level needed in the formulation, such carriers can make sunscreen products safer. BZ3 was embedded in solid lipid nanoparticles (SLN) by the hot high-pressure homogenization process and in poly(epsilon-caprolactone) (PCL) nanoparticles via the nanoprecipitation method. The particles remained stable for forty days. Compared to BZ3 enclosed in SLN, BZ3 encapsulated in PCL nanoparticles was released more quickly. The encapsulation of BZ3 in both nanostructures improved the sun protection factor. However, compared with SLN-BZ3, PCL nanoparticles containing BZ3 decreased its skin penetration to a greater extent. Additionally, BZ3 in SLN did not exhibit any cytotoxic or phototoxic effects on BALB/c 3T3 fibroblasts or human keratinocytes (HaCaT cells), whereas PCL nanoparticles containing BZ3 indicated a potential for phototoxicity in HaCaT cells. Despite this, mice did not develop allergic reactions to BZ3, whether it was present in free form or enclosed in PCL nanoparticles or SLN. The findings imply that these nanostructures could make intriguing sunscreen carriers ( Fig. 17a ). 51 A new method based on electron irradiation of poly(methyl methacrylate) and polystyrene nanoparticles (PMMA-PS NPs) (32) has been reported for creating non-toxic active ingredients for sunscreens. 52 Under electron irradiation, aromatic rings in PS and conjugated aliphatic C=C bonds in PMMA are formed, imparting UV-absorbing properties to the polymers. The extent of conjugation increases with higher electron fluence, leading to a redshift in the absorption spectra. 
The bombarded polymer NPs' in vitro SPF and PA values demonstrate their strong photostability and remarkable UV-absorbing capabilities throughout a wide UV spectrum. Based on OECD TG 432, the irradiated polymer NPs show no discernible evidence of cytotoxicity or phototoxicity and are categorized as nonphototoxic compounds. The electron irradiation process enables the large-scale production of non-toxic, UV-absorbing nanoparticles. Consequently, this method provides a valuable means of developing safe sunscreen ingredients as alternatives to current compounds that pose safety issues. Furthermore, the technique can be employed to manufacture photoprotective personal care items, UV-resistant textiles, coatings with UV protection, and filters for blue light ( Fig. 17b ). Furthermore, one of the most hazardous things that can damage skin is UV radiation. The development of sunscreens that effectively shield skin from overexposure to UV radiation is constantly progressing. Phenylbenzimidazole-5-sulfonic acid (PBSA) is typically employed as a sunblocking agent; nevertheless, it has the drawback of photodegrading and potentially damaging cells. To create a carrier polymer with unique and powerful capabilities, PBSA was first encapsulated into niosomes nanoparticles (33) and subsequently coated with chitosan- aloe vera (CS-nio-aloe/PBSA). The breakdown of PBSA and epidermal penetration are regulated by this polymer. Fourier transform infrared spectroscopy, scanning electron microscopy, transmission electron microscopy, and dynamic light scattering were used to characterize the CS-nio-aloe/PBSA polymer nanoparticles. Mice skin was used to measure the epidermal transparency of coated PBSA and investigate the carrier polymer release rate in vitro. The sunscreen-containing nanoparticle polymer was successfully produced, exhibiting an 80% encapsulation efficiency. The skin's surface was entirely covered in the formulation (CS-nio-aloe/PBSA). This bolsters its application as a skin protector, and its nanostructures prolong the release of PBSA. Improved cellular preservation, UV protection, management of free PBSA, and restricted penetration into the mouse skin epidermis may all be possible with PBSA encapsulated within CS-nio-aloe nanoparticles ( Fig. 18a ). 53 Chemical sunscreens such as octyl methoxycinnamate (34, OMC) are frequently used in sunscreen cosmetics. On the other hand, hazards such as skin-photosensitive responses might arise from direct skin contact. An intriguing way to improve the photostability of filters is to encase ultraviolet filters in microcapsules. In order to create synergistic sunscreen microcapsules using sophisticated freezing technology, sodium caseinate (SC) and arabic gum (GA) were used as the wall materials. The impact of pH, wall substance concentration, and wall/core ratio in the development of OMC microcapsules has been studied through many studies. The OMC microcapsules' shape, composition, and stability are assessed using TGA, FTIR, and SEM. The OMC microcapsule exhibits a smooth surface shape, consistent size distribution, and strong heat stability. The findings demonstrate that, for UV-B (280–320 nm), the OMC microcapsules' absorption of UV is superior to that of the uncoated OMC. Furthermore, in twelve hours, the OMC microcapsule released 40% and OMC released 65%; nevertheless, the OMC microcapsule sunscreen has a sun protection factor that is 18.75% greater than OMC's. 
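As an aside on the 80% encapsulation efficiency quoted above for the CS-nio-aloe/PBSA carrier: encapsulation efficiency is conventionally calculated from the total amount of filter used and the free (unencapsulated) fraction recovered after separating the carriers. The values in the short Python sketch below are hypothetical and serve only to show the arithmetic.

# Encapsulation efficiency (EE%) from total and free (unencapsulated) UV filter.
# The masses are hypothetical, not data from the cited work.
total_filter_mg = 50.0   # UV filter added to the formulation
free_filter_mg = 10.0    # filter found in the supernatant after separating the carriers

ee_pct = (total_filter_mg - free_filter_mg) / total_filter_mg * 100
print(f"Encapsulation efficiency: {ee_pct:.0f}%")   # 80% with these assumed values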
The hydrophobic interaction between SC and OMC and the electrostatic bonding between SC and GA may be responsible for the improved performance of the OMC microcapsules ( Fig. 18b ). 54 Solvent displacement was used to manufacture biodegradable polymer nanocapsules (35) containing the lipophilic sunscreen Parsol MCX (OMC) as the oil core. The formulations' photoprotective potential, OMC loading ability, and the effects of the stabilizing agents (polysorbate 85, P-85, and poloxamer 188, P-188) were investigated. The fast diffusion of the solvent across the interface is likely what causes the interfacial instability that leads to the formation of nanocapsules. It was determined that the stabilizing agents' capacity to prevent coalescence during solvent diffusion was what made them successful, and P-85 outperformed P-188 as a poly(ε-caprolactone) nanocapsule stabilizer. OMC could be loaded at high capacity. The high lipophilicity of the filter and the hydrophobicity and crystallinity of the polymer control the in vitro release from the OMC nanocapsules. The OMC nanocapsules offer significantly better, although partial, protection against UV-induced erythema when compared with a conventional gel, as illustrated in ( Fig. 19a ). 51 Through an experimental design approach, Na-lignosulfonates (36) have been re-evaluated as ideal partners for two carefully selected, well-characterized safe filters. The work demonstrates the potential to mitigate risks to the marine environment and human health by providing a range of photostable, non-cytotoxic emulsions spanning from SPF15 to SPF50, all available on demand. These emulsions contain only 5% LiS and minimal concentrations of organic filters. The absence of an ideal solar filter arises from the often unclear mechanisms governing microbial breakdown and accumulation in ecosystems. Prolonged exposure to the sun, even with a high SPF sunscreen, does not guarantee protection from serious skin damage. For instance, by adjusting the adsorption characteristics at the solid–liquid interface and optimizing the structure of micellar sunscreens, it might be possible to reduce the levels of BEMT and DHHB. As an additional starting point, 10% medium-size OLV lignin colloidal spheres increased the sun protection factor of a sunscreen with only 2% organic filters from 10.7 to 47.7. The SPF values found in vitro in this study are reliable; however, because SPF must be measured in vivo for marketing (packaging) and legal purposes, and the anti-inflammatory properties of BEMT and DHHB can inflate in vivo readings, the final assessment is likely to be overestimated ( Fig. 19b ). 56 The use of sunscreen is advised to shield human beings from UV rays that can cause harm and the onset of cancer. However, owing to their small molecular weights, the small-molecule organic UV filters included in sunscreens harm the ecosystem and could endanger users' health through transdermal absorption. An approach combining the Biginelli reaction and free-radical polymerization has been used to create polymeric (37) UV filters that are safe and coral-friendly. The resulting water-soluble polymer has exceptional UV absorption and shields mice from UV-induced skin burns considerably better than well-known UV filters and over-the-counter sunscreens. Owing to its high molecular weight, the polymer does not penetrate the skin when applied topically and is almost nontoxic to mice, algae, and corals. 
Understanding to create a bio- and coral-friendly polymeric UV filter using a straightforward multicomponent reaction will help in the creation of functional polymers with additional value for real-world applications ( Fig. 20a ). 57 Paints, sunscreens, cosmetics, food, and other consumer goods all include titanium dioxide nanoparticles. Whenever dispersed into the environment, stabilizing substances found in these goods may change the fate of nTiO 2 . nTiO 2 transport and deposition behavior in porous media as a result of the actions of TEGO carbomer (38), a polymeric stabilizing ingredient used in sunscreen. Columns filled with Federal Fine Ottawa sand were submerged in aqueous nTiO 2 solutions at pH 5.0 or 7.5 ± 0.2. At pH 5, which is within the predicted point of zero charge (PZC) of nTiO 2 (pH 6.3), nTiO 2 was not found in effluent samples in the absence of carbomer, but more than 80% of nTiO 2 was seen to elute at pH 7.5. The elution of nTiO 2 was greater than 94% at pH 5 and 7.5 after the addition of 3 mg L −1 carbomer, which reduced the PZC from 6.3 to less than 5. The column breakthrough and retention data were captured using a nanoparticle transport model that included a first-order, maximal retention capacity term. According to model outcomes, regardless of changes in solution chemistry, the addition of carbomer decreased the typical solid phase retention capacity from 3.40 to 1.10 g TiO 2 per g sand. These results show that polymeric stabilizing agents can significantly affect nTiO 2 destiny in porous media, potentially increasing nTiO 2 mobility in the surroundings and decreasing nTiO 2 filtration system efficacy ( Fig. 20b ). 58 Human skin fibroblasts are used to investigate the biocompatibility and sunscreen performance of a new sunscreen that simultaneously encapsulates zinc oxide nanoparticles and octocrylene in poly-styrene- co -methyl methacrylate (39, PMMA/PS) nanoparticles by the use of mini emulsion polymerization. PMMA/PS nanoparticles with excellent encapsulation efficiency and positive physical–chemical characteristics were effectively used to encapsulate both organic and inorganic filters for use in sunscreens. After being added to Artistoflex AVC gel, the nanoparticles produced a semi-solid formulation that was white, had a pH that was similar to the skin's pH and was homogenous in all respects. The semi-solid product with ZnO and octocrylene in PMMA nanoparticles demonstrated a good sun protection factor (SPF > 30), earning a 4-star rating from the Boots Star Rating System and being regarded as a good UVA sunscreen( Fig. 21a ). 59 Simply combining UV filters with aqueous cross-linkable PDMS coatings has allowed for the effective creation of PDMS-based skin sunscreens (40). Three kinds of sunscreens, PI, PO, and POI, were made using Mg/Al + Fe LDHs and an organic UV absorber, both in combination and independently. By using the hydrosilylation method at room temperature, all sunscreens can transform into transparent elastic films with good UV protection throughout the whole UV range (200–400 nm), skin analog mechanical performance, high WVT, and moderate adhesion strength. In the meantime, the films' WVT rate and mechanical strength may be improved by Mg/Al + Fe LDHs. However, especially at high UV, the organic UV absorber may reduce the mechanical strength and cause the PO films' surface to become greasy or even sticky. The oily feeling of PO films might be effectively eliminated by adding a tiny amount of Mg/Al + Fe LDHs. 
It's interesting to note that the POI sunscreen, which has 2.08 weight percent organic UV absorber and 0.69 weight percent Mg/Al + Fe LDHs, showed a sun-shielding performance that was on par with the high SPF commercial sunscreens. Such a modest UV filter concentration effectively mitigates sunscreen safety concerns ( Fig. 21b ). 60 Polymeric nanoparticles in sunscreens offer enhanced stability and improved UV protection due to their ability to encapsulate UV filters, but there are concerns about their potential penetration into the skin and the environment, which requires further research and regulation to ensure their safety and environmental impact. Metal oxide nanoparticles based sunscreen In cosmetics, titanium dioxide nanoparticles (41) are often utilized. It's notably present in sunscreens because of its ability to absorb UV radiation. Their biocompatibility is still debatable, though. In particular, both in vitro and in vivo studies have been done on Degussa P25 (P25TiO 2 NPs) exposed to solar-simulated radiation. Following a 6 hours exposure to P25TiO 2 NPs and light the integrity of tissues and cell viability were impacted with TEM providing evidence of decreased tissue quality along with potent oxidative stress indicators. A novel biocompatible substitute based on the fast sol–gel functionalization of titanium dioxide nanoparticles with vitamin B2 has been developed to prevent these undesirable consequences. These nanoparticles with functional properties did not exhibit any of the phototoxicity effects ( Fig. 22a ). 61 Additionally, the particles of a size determined in nanometers are present in sunblock based on zinc oxide (42). In this investigation, nanoparticles were found across four commercial sunscreens. The sunscreen-derived nanoparticles exhibited diverse morphologies, aspect ratios, and broad size ranges. When examining the size of the particles and charge on the surface of each nanoparticle, accumulation in their behavior was seen at different time intervals. Additionally, the characteristics of the nanoparticle content in the extraction and bought materials were compared with those of the genuine wastewater samples. According to the comparison, iron particles and co-contaminants with other organic components were found to be the second most common composition of nanoparticles found in wastewater samples, behind zinc, titanium, and silver elements. This experiment demonstrated the different morphologies of nanoparticles isolated from sunscreens and wastewater. Changes in the morphology, shape, and size of nanoparticles resulting from interactions with substances present in wastewater, seawater, or surface water have the potential to modify their negative effects on aquatic organisms. These impacts may differ from those observed when using pristine nanoparticles ( Fig. 22b ). 62 Additionally, an intra-laboratory evaluation was conducted to determine the efficacy of a technique for identifying TiO 2 -engineered nanoparticles (43) that are found in sunscreen that contains both upper nanometer-range and nanoscale TiO 2 in addition to iron oxide particles. To produce the measurement errors related to the mass-based particle size distribution calculation with quantitative asymmetrical flow field-flow fractionation (AF4) calculation of the hydrodynamic radius, three duplicate measurements were conducted over five different days. 
The analysis of TiO 2 ENPs found in sunscreen using AF4 separation-multi detection yields quantitative results with uncertainty based on the accuracy of 3.9–8.8%. As a result, this approach can be regarded as having excellent accuracy. Lastly, the bias data indicates that the lack of a sunscreen standard comprising certified TiO 2 ENPs means that the accuracy of the approach ( u t = 5.5–52%) can only be considered as a proxy ( Fig. 23a ). 63 The most prevalent active components in plenty of commercial goods, including sunscreen, are nanoparticles (44). Therefore, it is essential to accurately characterize the nanoparticles present in these items to improve product design and comprehend the potential toxicological effects of the nanoparticles. While bulk methods may provide some helpful information, they frequently are unable to distinguish individual particles; as a result, high-resolution nanoparticle characterization is frequently achieved via electron microscopy. Still, unique in situ techniques must be employed because the traditional high vacuum dry TEM does not correctly portray nanoparticle dispersions. The researchers employ a range of methodologies, such as liquid cell transmission electron microscopy, cryogenic (cryo)-TEM, and cryoscanning electron microscopy to examine a commercial sunscreen that incorporates zinc oxide and titanium dioxide. Sample preparation is not necessary for LCTEM analysis since it can detect ZnO dissolution by the use of merely TiO 2 nanoparticles due to beam artifacts. In contrast, ZnO and TiO 2 may be characterized using cryo-TEM, but only pure products (without dilution) can be analyzed using cryo-SEM, which biases the characterization towards the higher proportion of agglomerates and nanoparticles. Ultimately, to ensure efficient and secure product design and production, a precise characterization of market items can only be done using a mix of several in situ EM methods ( Fig. 23b ). 64 A customized solvent emulsification approach was utilized to effectively produce the SLNs (45), and the resulting SLN dispersion was then added to a cream base for topical administration. To create the solid lipid nanoparticles of the photoprotective plant Aloe vera , glyceryl monostearate was used as the lipid and Tween 80 as the surfactant. The drug's release profile demonstrated enhanced topical withholding of Aloe vera for an extended amount of time. The lotion was found to have an outstanding factor for sun protection and increased photoprotective activity. The sunscreen formulation underwent a skin irritation test, and the results revealed no indications of hypersensitive reactions or discomfort. Because metallic compounds are removed from sunscreen formulations, the shielding effect of herbal nanoformulations opens the door for the future use of solid lipid nanoparticles of botanical powders and extracts in cream to obtain additional beneficial advantages with little toxicity ( Fig. 24a ). 65 Fucoxanthin (46) is a naturally occurring carotenoid that is considered bioactive. Although fucoxanthin's polyunsaturated structure makes it physiochemically unstable to heat and acid, it is known for its protection against UV-B-induced cell destruction in hairless mice. This means that fucoxanthin has a low bioavailability, which restricts its use in the cosmetics industry. Systems of solid lipid nanoparticles are recognized for their suitability as sunscreen agent carriers. 
In this study, the sun protection factors of a macroemulsion and of an SLN formulation, each containing different types of sunscreen agents, were compared to assess the sunscreen-boosting effect of SLN as a carrier of functional ingredients, particularly fucoxanthin. Particle size measurements, DSC analysis, X-ray analysis, stability testing, and other results indicate that the fucoxanthin-loaded SLN formulation can be a stable and highly effective ingredient delivery system. Additionally, compared with the other formulations, the SLN formulation demonstrated a higher SPF rating, indicating a good sunscreen-boosting effect. The study suggests that the use of SLN as a carrier could improve the bioavailability of fucoxanthin and enable the manufacture of sunscreen products at a larger scale ( Fig. 24b ). 66 Lignin-based UV absorbers have been used to create safer bio-based sunscreens (47). The so-called CatLignins, partially demethylated and otherwise modified kraft lignins with an abundance of phenolic hydroxyl auxochromes and catechol units, outperformed conventional kraft lignins as sunscreen UV absorbers in terms of UVB-SPF and UVA–UVB transmittance values. Converting the lignins into nanoparticles greatly improved sunscreen performance. The UV transmittance of the best lignin sunscreen, which contained woody CatLignin nanoparticles, was just 0.5–3.8% throughout the UVA–UVB region, as opposed to 2.7–51.1% for a commercial SPF 15 sunscreen. Lignin-based sunscreens are especially suitable for applications in which a dark SPF tint is acceptable ( Fig. 25a ). 67 In simpler terms, another study looked at zinc oxide nanoparticles (48, nZnO) from three different sunscreens that can leach into human skin. The leaching rate varied among the sunscreens, ranging from 8% to 72%. The authors then tested the toxicity of these sunscreens on a tiny marine copepod, Tigriopus japonicus . The results showed that the sunscreens had different levels of toxicity, with the concentration needed to cause harm varying between products. The three sunscreens' 96 h median lethal concentrations (LC50) for T. japonicus were determined to be >5000, 230.6, and 43.0 mg chem per L, corresponding to Zn 2+ concentrations of >82.5, 3.2, and 1.2 mg Zn per L. Based on the outcomes of in vivo tests, T. japonicus showed increased expression of antioxidant genes and increased generation of reactive oxygen species after being exposed to each sunscreen at ecologically realistic doses for ninety-six hours. It appears that these nZnO-containing sunscreens may place marine life at risk of oxidative stress ( Fig. 25b ). 68 A modified high-shear homogenization method can be used to prepare lipid nanostructures appropriate for co-encapsulating UV-A and UV-B filters (BMDBM and OCT). To co-encapsulate the sunscreens, the SLNs (49) made with a 3.5% surfactant concentration using the Tween 20/poloxamer/lecithin surfactant system and the NLCs made with the same surfactant mixture plus 3% liquid lipid were selected. Both the lipid nanoparticle dispersions and the developed cosmetic formulations show better UV-blocking properties when the sunscreens are added, and the UV-blocking effectiveness of the lipid matrix and organic UV filters is improved by the presence of liquid lipids in the NLCs. By enhancing the UV-blocking performance of sunscreens encapsulated in lipid nanoparticles, it is possible to decrease the amount of filters in the final cosmetic formulation without sacrificing photoprotection or increasing adverse effects ( Fig. 26a ). 69 
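The 96 h median lethal concentrations (LC50) quoted above for T. japonicus are typically obtained by fitting a sigmoidal concentration–mortality model to the bioassay data. The short Python sketch below fits a two-parameter log-logistic curve to hypothetical mortality fractions using SciPy; the data are invented and the code is a generic illustration, not the analysis of the cited study.

# Estimate a 96 h LC50 by fitting a log-logistic concentration-mortality curve.
# Concentrations (mg/L) and mortality fractions are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
mortality = np.array([0.05, 0.15, 0.45, 0.80, 0.97])

def log_logistic(c, lc50, slope):
    return 1.0 / (1.0 + (lc50 / c) ** slope)

(lc50, slope), _ = curve_fit(log_logistic, conc, mortality, p0=[100.0, 1.0])
print(f"Estimated LC50 ~ {lc50:.0f} mg/L (slope {slope:.2f})")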
The harmful effects of the NPs (50) present in sunscreen compositions were investigated on D. tertiolecta , a marine microalga. Greater toxicity was observed for sunscreens in which zinc, in the form of ZnO NPs and Zn stearate, was present in combination with TiO 2 . This implies that zinc exerts toxicity in sunscreen formulations to a considerable extent. The primary indicators of the impact were ROS generation and DNA damage; the NPs isolated from the sunscreens therefore had genotoxic effects on the exposed algae. Growth suppression was also seen with the sunscreens containing ZnO NPs. The responses displayed by the NPs isolated from the sunscreens were not consistent with those seen in research using industrial nanoparticles, which suggests that the aging of sunscreens may affect the ultimate toxicity of TiO 2 nanoparticles. When comparing the outcomes of tests conducted with the sunscreens and with their corresponding nanoparticle extracts, it became evident that the whole sunscreen products induced more pronounced adverse effects. This suggests that components of the sunscreen formulations other than the nanoparticles may possess toxic properties, an observation supported by the toxicity exhibited by sunscreens formulated without nanoparticles. Additionally, there is a possibility that these components act in synergy with the nanoparticles, amplifying their overall impact on biological systems ( Fig. 26b ). 70 TiO 2 nanoparticles (51) in surface waters are mostly sourced from sunscreens. Because of the difference between the model nanoparticles often employed in studies and the more complex particles found in commercial products, the fate and toxicity of these particles have not been completely explored. To provide more realistic nanoparticle samples for future research, mild techniques for extracting TiO 2 nanoparticles from sunscreens were examined. Two techniques for recovering TiO 2 nanoparticles from sunscreens, both using a surfactant solution as the solvent, are based on ultrafiltration and ultracentrifugation, respectively. Eleven different commercial sunscreens with varying compositions were used to test these strategies. With the ultracentrifugation variant, about 250 mg of particles can be extracted from roughly 5 g of sunscreen in a single day. The recovery rates for ultrafiltration and ultracentrifugation were 52–96% and 78–98%, respectively. By employing UV spectrometry to measure the amount of avobenzone in the sunscreen extracts, the purification effectiveness of the ultracentrifugation variant was found to be high across all tested sunscreens. While size characteristics were similar, a substantial degree of variability in particle shape was found using a combination of transmission electron microscopy and dynamic light scattering. The isoelectric points of every sunscreen extract were below 4.6. According to time-of-flight secondary ion mass spectrometry, all TiO 2 particles were most likely coated, the majority with PDMS and the remainder with Al- and Si-based materials. A comparison of cryogenic transmission electron microscopy images of the particles inside the sunscreens and of the extracted particles showed that the geometry of the primary nanoparticles was unaffected by the extraction process, although the particles were aggregated inside the sunscreens. Ultrasonication could break apart these agglomerates. 
As a result, compared to model TiO 2 nanoparticles the extracted particles' size, shape, surface charge, and coating might be thought of as more globally significant ( Fig. 27a ). 71 Kraft lignin (52, KL) can be used in a variety of ways. However, KL is difficult to employ in skincare and nanoparticle manufacturing due to its dark color and large size distribution. Using the ultrafiltration membrane fractionation, the paper-making company separated KL, yielding four distinct types of lignin with varying molecular weights. After that, four different varieties of UL were used to self-assemble to create lignin nanoparticles or ULNPs. Low molecular weight lignin, such as ULA, showed good antioxidant qualities (89.47%, 5 mg mL −1 ), a high brightness (ISO% = 7.55), high L * value ( L * = 72.3), and low polydispersity index (PDI = 1.41), according to an analysis of the UL and ULNP properties. The ULNP exhibited a high dispersion in sunscreen and a restricted size distribution (0.8–1.4 m). Sunscreen's sun protection factor value surged from 14.93 to 63.74 when ULNP was applied at a 5% load. Consequently, this study provided a useful method for the full utilization of pulping waste KL ( Fig. 27b ). 72 Products made from snail slime are well-liked and utilized all over the world. As a result, gold nanoparticles were created using snail slime as a novel and alternative method, giving them intriguing features. The major components of the slime were used to design the inorganic metallic core of the 14 ± 6 nm wide hybrid gold nanoparticles (53), which were created using a straightforward, one-pot method. Among their other characteristics, different antioxidant and tyrosinase inhibitory activities were investigated using the DPPH and ABTS and tyrosinase tests, respectively. Positive outcomes allowed for their usage as an intriguing novel multifunctional cosmetic component. However, the photostability of gold nanoparticles, which was studied using a solar simulator lamp, points to their possible usage as a substitute for the inorganic sunscreen components that are often found in conventional cosmetic sunscreen solutions. The presumed Sun Protection Factor was assessed, and values between 0 and 12 were obtained. The research of gold nanoparticles derived from snail slime as a possible multipurpose platform in cosmetics has never been more appealing thanks to the suggested ecologically benign and economically advantageous nanoparticle production methodology, which adheres to the principles of Green Chemistry ( Fig. 28a ). 73 The Layered double hydroxides (54, LDH) are adaptable building blocks for creating cutting-edge materials. Rarely is the necessity to investigate green or sustainable approaches mentioned. In this study, the sun protection factor and antioxidant qualities of LDH composites made with tomato-derived natural ingredients. According to the findings, the composite materials' 11% organic matter concentration is enough to boost their antioxidant capabilities, such as their stronger antioxidant activity towards ABTS + than towards DPPH. The composite additionally reduces the amount of oxygen atmospheric degradation that the Rapidoxy assay could detect, while the SPF revealed that the LDH particles, rather than the organic content are more important for sunscreen protection. The composite of lycopene and LDH particles increases lycopene's hydro-dispersibility and boosts its antioxidant stability, both of which are crucial characteristics for creating cosmetic or dietary components ( Fig. 28b ). 
74 In addition, zinc oxide nanoparticles (55) doped with the elements Al & Na metals are produced and characterized in a way that is environmentally conscious to reduce the photocatalytic function of ZnO for use in sunscreen. The reducing agent for the metal-doped zinc oxide materials was extracted from Averrhoa carambola , popularly known as star fruit, and manufactured utilizing the microwave process. Using techniques such as TEM, EDX, SEM, UV-vis spectroscopy, and XRD the effects of metal-ion doping on the crystalline structure, morphology, and optical properties of ZnO were examined. The sunscreen formulations with undoped ZnO, Na-doped ZnO, and Al-doped ZnO NPs had sun protection factors of 10.10, 25.10, and 43.08, respectively. As a result, the SPF of Na/ZnO and Al/ZnO was higher. Furthermore, the produced sunscreens and nanomaterials showed antioxidant properties and were efficient against a variety of bacteria, including Gram-positive as well as Gram-negative. The photocatalytic activity of the undoped ZnO, Na/ZnO, and Al/ZnO NPs were assessed using the methylene blue (MB) degradation method. The results showed that the rates were 66%, 46%, and 38%, respectively. Consequently, ZnO NPs' photocatalytic activity was reduced with Na- and Al-doping because of their structural flaws. Al/ZnO is also a prime choice for a component in sunscreens( Fig. 29a ). 75 Hydrothermal synthesis was used to create TiO 2 @Y 2 O 3 nanoparticles (56) with Y/Ti weight ratios of 5 and 10 wt%. When Y 2 O 3 was added to TiO 2 , it was discovered that, in comparison to pristine TiO 2 (P25), there was less scattering in the visible region and more absorbance in the UVB and short UVA wavelength ranges. Additionally, under both UV and simulated solar irradiation, these composites significantly reduced the photoactivity of TiO 2 (P25), as measured by dye staining. The fundamental process behind this decrease was attributed to Y 2 O 3 's inhibition of free radical production and active charge carrier transfer blockage from the coated layer. Under all test settings, the composite particles showed the greatest HaCaT cell vitality with cell viability rising with surface Y 2 O 3 loading. The composites seemed to support cell survival and proliferation in a concentration- and yttria loading-dependent way when UV light was absent. It is hypothesized that this results from the less active surface-treated particles inducing less oxidative stress. TiO 2 -based nanoparticles were demonstrated to produce opposing effects under simulated solar radiation, with increased cell death and protection against exposure to radiation at low and high radiation levels, respectively. The established UV absorption and photocatalytic efficiency of the examined materials, respectively, were ascribed to these biological reactions. Therefore, TiO 2 @Y 2 O 3 10 wt% showed the best protection and least amount of induced cell death, followed by TiO 2 (P25) and the 5 wt% composite. The yttria-based nanocomposites exhibit enhanced optical characteristics and biocompatibility, along with a decrease in photocatalytic activity. These findings underscore the possible advantages of incorporating these materials into sunscreen formulations ( Fig. 29b ). 76 The nanoparticles are pushing the boundaries in science, technology, medicine, and consumer products, concerns are growing about their potential toxicity to human health and the environment. 
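The methylene blue degradation percentages used above to compare the photocatalytic activity of the undoped and doped ZnO samples are often reduced to an apparent pseudo-first-order rate constant, k_app, via ln(C0/Ct) = k_app·t. The Python sketch below fits k_app to a hypothetical time course; the values are illustrative and do not come from the cited work.

# Apparent pseudo-first-order rate constant for photocatalytic MB degradation:
#   ln(C0 / Ct) = k_app * t
# Irradiation times (min) and residual dye fractions are hypothetical.
import numpy as np

t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])            # irradiation time, min
c_over_c0 = np.array([1.00, 0.78, 0.60, 0.47, 0.36])    # Ct / C0 from absorbance

k_app, intercept = np.polyfit(t, np.log(1.0 / c_over_c0), 1)
degradation_pct = (1.0 - c_over_c0[-1]) * 100

print(f"k_app ~ {k_app:.4f} min^-1; degradation after 120 min ~ {degradation_pct:.0f}%")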
To assess the toxicity of zinc oxide nanoparticles (57, ZnO NPs), researchers compared sunscreen-extracted ZnO NPs with industrial-grade ZnO NPs. They exposed E. coli bacteria to varying concentrations of ZnO NPs for different durations. The analysis revealed that the growth of E. coli and the production of reactive oxygen species (ROS) depended on factors such as particle type, size, and the level and duration of exposure to ZnO NPs. ROS generation was observed to be higher during the growth phase than during the stationary phase. Notably, industrial ZnO NPs with smaller particle sizes exhibited greater toxicity than those extracted from sunscreen, which had larger particle sizes. The high toxicity of the smaller, homogeneously distributed particles is likely attributable to their size, suggesting a potential underlying cause of their increased harmful effects ( Fig. 30a ). 77 Additionally, lignin nanoparticles (58) can function as photostabilizers and sustainable carriers of two widely used chemical UV filters, octyl methoxycinnamate and avobenzone. Using deionized water as the antisolvent and eco-certified dimethyl isosorbide as the principal solvent, the chemicals were encapsulated into kraft lignin NPs by nanoprecipitation. After encapsulation, both compounds showed a greatly increased half-life under UV irradiation. Coencapsulating avobenzone and octyl methoxycinnamate with hydroxytyrosol, a naturally occurring antioxidant phenol recovered from olive oil wastes and known for its skin-regenerating qualities, improved the stabilizing properties of the lignin nanoparticles even further ( Fig. 30b ). 78 Lignin nanoparticles (LNPs) (59) are used in a variety of industrial settings. Although LNPs can be nanoprecipitated quickly and affordably, the process still requires organic solvents that may be hazardous, which makes their widespread application challenging. A scalable nanoprecipitation method using isopropylidene glycerol and dimethyl isosorbide, two environmentally friendly solvents, has been developed to produce colloidal lignin nanoparticles (cLNPs). Compared with the parent LNPs and bare lignin, cLNPs demonstrated superior UV-absorbing qualities and antioxidant activity, regardless of the experimental setup. The cLNPs were then utilized to create environmentally friendly sunscreen formulations that demonstrated strong UV-shielding activity even in the absence of physical filters (ZnO and TiO 2 ) and artificial boosters (microplastics). Biological experiments with human HaCaT keratinocytes and human skin equivalents showed no cytotoxicity or genotoxicity, together with effective defense of the skin against UV-A damage ( Fig. 31a ). 79 Sanshool, a naturally occurring compound derived from Zanthoxylum xanthoxylum , plays a crucial role in preventing photodamage. However, its use is restricted by its inherent instability and the risk of skin penetration. Researchers devised a method using melanin-like materials to enhance the efficiency and stability of sanshool, resulting in the creation of melanin-sanshool nanoparticles (60). Through boron esterification interactions, they successfully produced melanin-S NPs with consistent sizes, improved stability, effective ultraviolet absorption, and antioxidative capacity in laboratory settings. 
They then studied the skin permeation, photoprotective activity, and potential mechanisms of the melanin-S NPs using both cellular and animal models of skin photodamage. This bioinspired use of melanin-based nanomaterials is a promising platform for enhancing the crucial properties of naturally occurring functional molecules such as sanshool, opening up new possibilities for advanced photoprotective applications ( Fig. 31b ). 80,81 Metal oxide nanoparticle-based sunscreens offer effective protection against both UVA and UVB rays while being less likely to cause skin irritation than chemical sunscreens. However, concerns have been raised about the potential environmental impact of metal oxide nanoparticles, particularly in marine ecosystems, and their potential to cause oxidative stress in cells. Abbreviations Sunscreen Products; Total Phenol Content (TPC); Total Flavonoid Content (TFC); Cashew Nutshell Liquid (CNSL); Sun Protection Factor (SPF); Ultraviolet (UV); Titanium Dioxide (TiO 2 ); Zinc Oxide (ZnO); Powder X-ray Diffraction (PXRD); Thermogravimetric Analysis (TGA); Differential Thermal Analysis (DTA); Benzophenones (BPs); Dissolved Oxygen (DO); Reactive Oxygen Species (ROS); Time-of-Flight Mass Spectrometry (TOF-MS); Density Functional Theory (DFT); Dioxybenzone-Pullulan Polymer; Dioxybenzone; Nuclear Magnetic Resonance (NMR); Single Electron Transfer (SET); Highest Occupied Molecular Orbital (HOMO); Lowest Unoccupied Molecular Orbital (LUMO); Excited State Intramolecular Proton Transfer (ESIPT); Time-Dependent Density Functional Theory (TD-DFT); Human Dermal Fibroblasts (HDF); Deoxyribonucleic Acid (DNA); Solid Lipid Nanoparticles (SLN); Poly(Epsilon-Caprolactone) (PCL); Phenylbenzimidazole-5-Sulfonic Acid (PBSA); Fourier Transform Infrared (FTIR); Scanning Electron Microscopy (SEM); Transmission Electron Microscopy (TEM); Dynamic Light Scattering (DLS); Nanoparticles (NPs); Energy Dispersive X-ray Spectroscopy (EDX); Lignin Nanoparticles (LNPs). Conflicts of interest The authors have no conflict of interest in any part of this manuscript. Supplementary Material
M. R. thanks Prof. T. Sasiprabha, Vice-Chancellor, Sathyabama Institute of Science and Technology (Deemed to be University), for her encouragement. Biography Dr M. Rajasekar has been working as Scientist-C/Assistant Professor (Research) in the Centre for Molecular and Nanomedical Sciences, International Research Centre, Sathyabama Institute of Science and Technology (Deemed to be University), Chennai, since 2018. He was born in Agatheripattu village, Tiruvannamalai district, Tamil Nadu, India, and his main research interest is in the area of synthetic organic chemistry. He received his BSc degree from the Department of Chemistry, Arignar Anna Govt. Arts College, Cheyyar, affiliated to the University of Madras, in 2005 and his MSc degree from the Department of Chemistry, C. Abdul Hakeem College of Arts and Science, Vellore, affiliated to Thiruvalluvar University, in 2007. He received his M.Phil degree from the Department of Chemistry, The New College, affiliated to the University of Madras, Chennai, in 2008. He worked as a Visiting Lecturer at the Department of Chemistry, Anna University, Chennai, from 26.06.2008 to 30.12.2008 and received his PhD degree from the Department of Organic Chemistry, University of Madras, in 2015. He worked as a National Postdoctoral Fellow (DST-SERB) at Sathyabama University, Chennai, from 2016 to 2018. He received his BEd (2021) and MEd (2023) degrees from Arunachala College of Education, affiliated with Tamil Nadu Teachers Education University, Chennai. He has been awarded the NET, SLET, SET, NPDF, DSKPDF and Nehru PDF, a Young Scientist award, a place on the RSC Advances reviewer panel, and the National Educational Star Award by "The Glorious Organization for Accelerated to Literacy (GOAL)", New Delhi. Additionally, he has been granted five patents, published twelve books, and authored two monographs. Biography Ms Jennita Mary was born in 2002 in Tamil Nadu, India. She is currently pursuing her undergraduate degree in the School of Bio and Chemical Engineering, Department of Biotechnology, Sathyabama Institute of Science and Technology (Deemed to be University), Chennai, Tamil Nadu, India. She developed two working products in the medical field during the third year of her UG program, and she has also published a review paper in the field of nanotechnology. She is currently carrying out her project under the supervision of Dr M. Rajasekar and Dr Masilamani Selvam. Her research interests include protection of the marine ecosystem and organic cosmetics. Biography Ms Meenambigai S. was born in 2002 in Tamil Nadu, India. She is presently pursuing her undergraduate studies at the School of Bio and Chemical Engineering, Department of Biotechnology, Sathyabama Institute of Science and Technology, a deemed university located in Chennai, Tamil Nadu, India. During her third year, she engaged in research on leaf disease detection using CNN (Industry 4.0) and also gained valuable experience through an internship in the field of bioinformatics. She is currently involved in a research project under the guidance of Dr M. Rajasekar. Her research interests predominantly revolve around cosmetics-based products and their significance. Biography Dr M. Masilamani Selvam was born on 15th May 1976 at Sundaranachiapuram, Rajapalayam Taluk, Tamil Nadu, India. 
After his schooling in his native place, he began his bachelor's degree (BSc Zoology) at ANJA College, Sivakasi, and then moved to the CAS in Marine Biology, Annamalai University, for his higher studies (MSc Coastal Aquaculture) and research (PhD Marine Biology). After obtaining his doctoral degree, he started his career as a Lecturer in Biotechnology at Sathyabama University in 2008. Ever since, his research work has been directed towards the field of Marine Biotechnology and Aquaculture. He was also awarded an M.Tech. degree in Biotechnology by Sathyabama University in 2010. He has received several prestigious fellowships, including the DST-Govt. of India sponsored BOYSCAST Fellowship to work at Göttingen University, Germany, in 2011, and the INSA Summer Fellowship to work with Prof. K. Kathiresan in 2013. He has been working as an Associate Professor in the Department of Biotechnology, Sathyabama Institute of Science and Technology, Chennai, India, since 2015, and has published 2 books, 3 book chapters, and more than 57 research articles.
CC BY
no
2024-01-16 23:43:49
RSC Adv.; 14(4):2529-2563
oa_package/03/02/PMC10788710.tar.gz
PMC10788711
0
Introduction The incidence of thyroid carcinoma (TC) continues to increase worldwide due to the increasing number of papillary thyroid microcarcinoma (PTMC) diagnoses 1 , 2 . PTMC diagnosis is now more prevalent due to the rapid popularization of diagnostic imaging techniques, such as ultrasound and computed tomography (CT) 3 , 4 . Over 85% of TCs originate from follicular cells and are classified as papillary TC (PTC), which is considered a well-differentiated, low-risk carcinoma. Only some rare subtypes, such as the tall cell and columnar cell variants, are considered high-risk TCs 5 . Starting in 2010, the Korean Thyroid Association revised the guidelines for the diagnosis and management of thyroid nodules (TN) and TC 6 , 7 . These guidelines recommend fine needle aspiration cytology (FNAC) for nodules > 10 mm in patients with TC risk factors or suspicious cervical ultrasound features, and their adoption was accompanied by a significant increase in PTMC incidence in Korea. With this increasing incidence came attention to the three "over" problems associated with PTMC: overdiagnosis, overtreatment, and over-staging. In studies of other cancers, breast cancer (BC) was found to be the most commonly overdiagnosed, followed by prostate cancer (PC), lung cancer, and TC 8 . At present, multiple guidelines recommend a non-surgical strategy for PTMC, advocating active surveillance (AS) for low-risk PTMC instead of immediate surgery (IS) and converting to surgery only in case of tumor progression. In 2015, the American Thyroid Association (ATA) recommended that fine needle aspiration (FNA) not be performed for small TN (< 1.0 cm) 9 . However, contrary to the ATA guidelines, the Japan Association of Endocrine Surgeons recommends early fine needle puncture to facilitate staging and to guide clinical strategy 10 . A prospective study in Canada showed that 71% of patients with low-risk PTC (< 2 cm) preferred AS over IS 11 . However, this reflects the patients' initial preference, and the proportion remaining on AS may decrease as follow-up lengthens and psychological factors change. Much of the current controversy about overdiagnosis and overtreatment has focused on low-risk PTC because of a substantial reservoir of subclinical cancer and stable overall mortality 12 . Although most low-risk PTCs are indolent, some show aggressive behavior, accompanied by lymph node metastasis 13 , 14 and/or distant metastasis (0.5%) 15 . In addition, PTMC can be sporadic or non-sporadic, with sporadic PTMC having a lower incidence of lymph node metastasis and a lower risk of recurrence than non-sporadic PTMC 16 . Therefore, conservative treatment is recommended for patients with sporadic PTMC. Additionally, tumor location and other factors also affect the treatment strategy; for example, IS is recommended for PTMCs with potential recurrent laryngeal nerve or tracheal involvement 17 . In this review, we discuss the advantages and disadvantages of AS and IS in order to inform better clinical strategies for PTMC patients.
Conclusion and prospects Several studies of low-risk PTMC patients have confirmed the effectiveness and relative safety of AS, which is being accepted by a growing number of patients and doctors. However, the indications for and safety of AS in PTMC still need to be confirmed in large clinical studies. Early detection of PTMC with aggressive biological behavior is one of the key future concerns for AS. It is hoped that, in the future, AI and machine learning will be used to judge the nature of tumors and that FNA-based molecular diagnosis will identify tumors with rapid growth or metastatic potential, thereby improving the effectiveness and safety of AS. Nevertheless, multidisciplinary decision-making that accounts for the preferences and risk tolerance of each individual is required to help patients choose between IS and AS for PTMC management.
Competing Interests: The authors have declared that no competing interest exists. Overdiagnosis of papillary thyroid microcarcinoma (PTMC) is prevalent, and effective management of PTMC is an important matter. The high incidence and low mortality rate of papillary thyroid carcinoma (PTC) justify a preference for active surveillance (AS) over immediate surgery (IS), particularly in low-risk PTMC. Japan introduced AS in the 1990s as an alternative to surgery for PTMC, and it has shown promising results; the safety and efficacy of AS management in PTMC have been verified. However, AS may not be suitable for all PTMC cases, and striking the right balance between AS and IS in decision-making requires careful consideration. Therefore, we collected and analyzed the relevant evidence on clinical strategies for PTC and discuss AS and IS from health, economic, and psychological perspectives, to help clinicians choose a more appropriate clinical strategy for PTC.
The incidence of thyroid carcinoma has changed with the update of guidelines At present, the incidence of several cancers, such as TC, BC, and PC, is on the rise, but mortality has not significantly increased. The improvement in diagnostic accuracy and the reduction in mortality for BC and PC are associated with early diagnosis and intervention, suggesting that early clinical intervention is beneficial for these two cancers. Welch et al. proposed two explanations for the overdiagnosis of cancers: 1) that the suspected cancer did not progress, or 2) that the cancer progressed so slowly that no symptoms appeared before the patient died of other causes 18 . However, the prerequisite for these explanations is that the tumor does not progress or that its progression is not life-threatening. In 1999, South Korea launched a National Thyroid Examination Project, which increased the incidence of TC in South Korea by 15-fold 19 , and a similar phenomenon was observed in other countries 20 . The incidence of TC began to decline after South Korea discontinued the Nationwide Thyroid Examination Project in 2014 21 . Therefore, screening of all ages and populations is not appropriate; it should instead be recommended for people in high-risk occupations and in age groups with a high incidence. A study on TC prognosis by the National Cancer Institute of the United States (US) revealed that the life-table estimates of the 20-year cancer-specific survival rate of patients who received immediate treatment and of those who did not were 99% and 97%, respectively 22 , suggesting a similar prognosis for both strategies. This implies that although PTC progresses slowly, it may still pose a potential threat to life. The incidence of TC continues to rise steadily in high-income and developing countries, especially in China, Colombia, Lithuania, and Belarus, and particularly in middle-aged women (35-64 years old) 23 . One of the major reasons for the rapid rise of TC is that a large number of PTMCs are diagnosed. In China, the incidence of PTMC increased to 32.1% from 2000 to 2014 24 . A study based on the Surveillance, Epidemiology, and End Results (SEER) database revealed that the incidence of PTC increased during 2000-2009 (APC 6.80 [95% confidence interval (CI) 6.46-7.13]), began to slow down during 2009-2014 (APC 2.58 [CI 1.71-3.47]), and has declined annually since 2014 (APC -2.33 [CI -3.15 to -1.51]). In addition, distant metastasis of PTC decreased significantly from 2015 to 2018 (APC -17.86 [CI -26.47 to -8.25]). However, incidence-based mortality increased during 2000-2018 (average APC 1.35 [CI 0.88-1.82]) 25 , indicating that although incidence declined after the clinical guidelines were followed, incidence-based mortality did not improve. The European EUROCARE-2 study (1985-1989) revealed that the 5-year overall survival (OS) of male and female TC patients was 72% and 80%, respectively 26 . However, in this period TC was not classified into different pathological types. Age-standardized relative survival (RS) rates for PTC during 1990-1994 in the EUROCARE-3 survey were 91% for men and 96% for women 27 . In the EUROCARE-4 study (2000-2002), although classification by pathology was not performed, the age-adjusted 5-year survival rate for TC overall was 83.2%, compared with 93.5% in the contemporaneous US SEER-13 registries 28 . The reduction in TC mortality in the US during the same period may have been a benefit of adherence to guidelines 29 . 
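The annual percent change (APC) values cited above are derived from a log-linear regression of the age-standardized rate on calendar year: APC = 100 × (exp(β) − 1), where β is the fitted slope of ln(rate) versus year. The short Python sketch below reproduces this arithmetic on hypothetical incidence rates; it is illustrative only and does not use the SEER data.

# Annual percent change (APC) from a log-linear trend: APC = 100 * (exp(beta) - 1).
# Incidence rates (per 100,000) are hypothetical, not SEER data.
import numpy as np

years = np.array([2000.0, 2001.0, 2002.0, 2003.0, 2004.0, 2005.0])
rates = np.array([7.0, 7.5, 8.0, 8.6, 9.1, 9.8])

beta, _ = np.polyfit(years, np.log(rates), 1)
apc = 100 * (np.exp(beta) - 1)
print(f"APC ~ {apc:.2f}% per year")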
British investigators attributed the poorer TC outcomes in the UK, compared with the European average, to non-compliance with guidelines. After the UK began to promote clinical guidelines for TC, the 5-year RS of PTC in the EUROCARE-5 study (2000-2007) improved significantly, to 94% for men and 98% for women 30. The results of the EUROCARE-6-based study have not yet been published, and more refined data on differentiated thyroid cancer (DTC), such as data on PTMC, are expected to become available (Table 1). A US study of PTMC begun in 1995 showed that 16.7% of PTMCs recurred 31, whereas in a US prognostic study of PTMC begun in 2000, the recurrence rate after surgery was about 3% 32. The data of these retrospective studies were collected before and after the publication of the ATA guidelines, so it may be inferred that the guidelines helped improve the prognosis of PTMC. In addition, another National Cancer Database (NCDB)-based study showed that patients with DTC managed according to the 2009 ATA or National Comprehensive Cancer Network (NCCN) guidelines had a significantly better 15-year DSS than those who were not (78% vs 68%) 33. A retrospective study based on the SEER database showed that more patients with DTC received the recommended surgical treatment after publication of the 2006 ATA guideline, with a significant improvement in 5-year DSS 34. The NCDB and SEER databases capture the majority of DTCs in the US, so these data suggest that updated guidelines help improve DTC outcomes. After a period of high incidence, and in line with guideline updates, the treatment strategy for TN or PTMC shifted towards non-surgical management 9, 35, which may be one reason for the decrease in incidence in recent years. In the 1990s, treatment outcomes for TC in the US were similar to those of BC today, and although the incidence of TC is rising while mortality is declining 36, this does not imply that TC is indolent. On the contrary, the decline in mortality demonstrates the effectiveness of standardized treatment. The decline in mortality may not be the result of AS, but rather the outcome of early detection and early diagnosis.
Comparison of advantages and disadvantages of AS and IS in PTC
AS was initially used in patients with localized PC and has since been applied to a variety of cancers, such as urethral cancer and intraocular melanoma 37, 38. These tumors are characterized by slow growth but carry a risk of progression or metastasis. Sugitani et al. recently published a survey emphasizing that > 50% of low-risk PTMCs in Japan are under AS 39. At present, AS is no longer limited to PTMC; PTCs with larger diameters, and even PTCs with lymph node metastasis, have been added to AS cohorts 40. One study showed that, despite thyroidectomy, cancer-related deaths of patients with PTC < 2 cm accounted for 12.3% of all PTC deaths 41. Adherence to clinical guidelines, early diagnosis, and multidisciplinary management have decreased the mortality of most tumors. A retrospective study on AS found no significant relationship between serum thyroid-stimulating hormone concentration and PTMC progression 42, 43. However, another study conducted at Kuma Hospital in Japan showed that levothyroxine treatment is associated with decreased tumor growth during AS, although further studies are needed to confirm this result 44.
Therefore, apart from ultrasound, there is currently no reliable serum marker for monitoring during AS follow-up, and owing to the lack of radiological or genetic indicators of PTMC progression, risk criteria for PTMC under AS cannot yet be firmly established 45.
Advantages of AS
Since the 1990s, Kuma Hospital in Japan has been using AS instead of IS for low-risk PTMC patients, and the approach has since been implemented in other hospitals in Japan, the US, Korea, Italy, and China, with satisfactory results after long follow-up. In a 30-year cohort study of AS and IS at Kuma Hospital in Japan, the 10-year and 20-year tumor growth rates were 4.7% and 6.6%, respectively; only one patient in the AS subgroup developed distant metastases, and none of the patients in this study died of TC 46. This long-term study confirmed the safety and feasibility of AS. In a prospective study of low-risk PTC, clinical outcomes were similar for AS and IS after a median of 37.5 months 47. More importantly, based on current clinical research, TSH suppression therapy was shown to slow the growth of PTMC and improve the safety of AS 44, 48. A significant reduction in surgical complications is one of the advantages of AS. Thyroidectomy may cause permanent hoarseness, permanent hypoparathyroidism, and iatrogenic hypothyroidism in 1%, 2%, and 4% of TC patients, respectively 9, 49, 50. Additionally, permanent postoperative hypocalcemia may be accompanied by serious, even life-threatening, symptoms, requiring replacement therapy and long-term monitoring after discharge 51. IS may therefore be associated with a higher rate of complications, whereas AS carries no increased risk of persistent or recurrent structural disease compared with IS 47, 52. A recent study showed that when AS was converted to surgery, surgical complications did not differ from those of IS 53. Based on the results of a large number of cohort studies of AS and IS, AS in small, low-risk (primarily papillary) DTC is a relatively safe management strategy 54. AS also avoids long-term TSH suppression treatment, which not only helps stabilize patients' emotions 55 but also reduces the risk of osteoporosis and cardiovascular disease 56, 57. In addition, AS is beneficial for patients' psychological and economic well-being and helps avoid surgical complications, which often lead to a decline in quality of life (QoL). A cohort study of 222 patients with 4 years of follow-up found that patients in the IS group suffered significantly more anxiety 58. Considering that incidence and mortality increase with age, elderly people are also susceptible to anesthesia-related complications, which have a 0.5% incidence in patients over 80 years 59. Therefore, AS may be suitable for elderly patients as well as for patients with multiple comorbidities who cannot tolerate surgery. Pregnant women are also potential AS candidates, because thyroid hormones are critical for fetal development, and AS of low-risk PTMC can reduce the impact of thyroid hormone fluctuations on the fetus; if surgery becomes necessary, the best time for thyroidectomy is the second trimester of pregnancy 60. A study at Kuma Hospital showed that 8% of low-risk PTMC patients had a ≥ 3 mm increase in tumor diameter during pregnancy 61, and a retrospective study of 51 low-risk pregnant women with PTMC showed tumor enlargement in 8% of cases, but no new lymph node metastasis 62.
Although this may be attributable to hormonal fluctuations, further research is required to determine whether pregnancy is a risk factor for PTMC progression.
Disadvantages and limitations of AS
The current version of the guidelines recommends AS for low-risk PTMC (T1aN0M0). However, the accuracy of identifying low-risk PTMC remains unsatisfactory. A retrospective study of more than 900 PTMC cases showed that 9.6%, 5.6%, and 1.1% of the cases were in the T3, N1a, and N1b stages, respectively 63. Another retrospective study of 108 patients showed that the Kuma criteria could not accurately predict the risk of PTMC: among 29 patients classified as low-risk PTMC before surgery, 10 were found on postoperative pathology to have clinical progression 64. Some PTMCs, although progressing slowly, can grow close to important blood vessels and nerves, and tumor enlargement can lead to vascular or nerve invasion. In DTC, vascular invasion is associated with tumor persistence/recurrence and shorter DSS 65. A Japanese modeling study showed that the probability of lifetime disease progression in young TC patients is as high as 60% 66. Although multiple treatment modalities are now available for progressive thyroid cancer, a subset of resistant and progressive disease can still develop 67-69. Furthermore, some PTMCs, although small, can be highly invasive 16. Some pathological studies found that PTC > 1 cm may be associated with higher rates of lymphatic vascular invasion (LVI) 70, although the impact of LVI on the prognosis of PTC is controversial. Previous reports suggested that PTMCs < 5 mm are usually not invasive, while those > 6 mm have a higher risk of lymph node metastasis 71. In a retrospective study, the sensitivity for correctly identifying central-compartment lymph node metastasis was 22.6-55% 72, and for lymph node micro-metastases preoperative ultrasound sensitivity is also very low (26-56.2%) 73-75. Some PTMCs even show skip metastases 76. At present, molecular diagnosis of PTMC by FNA remains difficult, as gene mutations such as BRAF cannot be reliably identified, although high-resolution ultrasound and ultrasound-guided FNA biopsy (FNAB) can be used to diagnose TN of ≥ 3 mm diameter 20, 77, 78. A SEER database study compared the biological behavior of the diffuse sclerosing variant, the tall-cell variant, and classical PTMC, and found that the former two subtypes were more invasive and had a greater probability of lymph node metastasis 79; these pathological subtypes appear difficult to identify by cytology. Similarly, LVI cannot be detected by preoperative ultrasound or cytology. An analysis of the NCDB showed that the presence of LVI is independently associated with reduced survival in PTC patients 80, and since LVI is difficult to identify under AS, PTC patients with LVI may be exposed to a risk of tumor progression or metastasis. A 10-year AS study revealed that 7-16% of patients required conversion to surgery, due to tumor growth (4-8%), cervical lymph node metastasis (1-2%), or progression of other thyroid/parathyroid diseases or personal reasons (2-6%) 81. During the follow-up period, tumor enlargement or cervical lymph node metastasis was more likely to occur in young patients (< 40 years). An AS study in South Korea found that 14% of PTMCs increased significantly in size, while 17% of TN decreased in size during AS; however, the latter nodules were cystic and mostly benign 82.
The change in tumor volume is determined by the tumor's biological properties. Although current studies show only a small difference in prognosis between AS and IS, the evidence for the safety of AS is still limited. Age is another limitation of the AS selection strategy in PTMC. Compared with older PTMC patients, the progression rate and tumor volume are higher in young PTMC patients 83, 84. In 2014, Ito et al. reported that patient age was an important factor in tumor enlargement and lymph node metastasis, with the younger group (< 40 years) more likely to show tumor enlargement (p = 0.0014) and cervical lymph node metastasis (p < 0.0001) than the older group (> 60 years) 83. The elderly therefore seem more suitable for AS, but paradoxically they also appear more prone to aggressive tumor subtypes 85. The recurrence rate of PTC is higher in young people but their mortality is lower, while the elderly are more likely to show disease progression. Moreover, a propensity score matching study showed that, compared with the surgical group, mortality in patients > 60 years under observation gradually increased with age 86. A study based on SEER data suggests that men ≥ 45 years of age or with PTC ≤ 2 cm should at least receive a thyroid lobectomy 41. With the update of the guidelines, the age cutoff has been raised to 55 years, but this does not reduce the risk of PTMC progression in elderly patients. Most PTMC patients under AS are from Japan (Kuma Hospital in Kobe and the Cancer Institute in Tokyo), while AS studies, or their sample sizes, in Europe and the US are insufficient. This geographic and ethnic imbalance will pose challenges for the global promotion of AS for PTMC.
Current Indications and Suitable Candidates for AS
Patients with asymptomatic PTMC without clinically significant lymph node metastasis, invasion of the recurrent laryngeal nerve or trachea, high-grade cytological features, or distant metastasis are potential candidates for AS 87. However, it is difficult to reliably identify mucosal invasion of the trachea before surgery. Several studies have also demonstrated that AS is a viable alternative treatment strategy (Table 2). According to the recommendations of Memorial Sloan Kettering Cancer Center (USA), the ideal candidates for AS are older patients (> 60 years) with unifocal PTMC and no evidence of lymph node metastasis 88; however, the AS strategy and the frequency of follow-up for such patients were not specified. As reflected in the latest version of the ATA guidelines, lower-intensity management can be adopted for low-risk TC, and in appropriate patients AS with watchful waiting and serial neck ultrasound evaluation can replace IS 7. AS is recommended for low-risk, unifocal PTMC patients without extrathyroidal extension or cervical lymph node involvement 20. However, the lack of a pathological diagnosis may lead to cervical lymph node micro-metastases being overlooked. At present, ultrasound follow-up suggests the following triggers for PTMC patients under AS to switch to surgery: (1) the thyroid nodule increases by more than 3 mm compared with the initial value; (2) cytology-confirmed metastatic lymph nodes in the neck; or (3) the tumor volume increases by more than 30-50% 10, 89. At present, the safety of AS tends to be evaluated in terms of the increase in tumor volume.
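To make these monitoring triggers concrete, the short Python sketch below checks the conversion criteria summarized above (a ≥ 3 mm diameter increase, cytology-confirmed nodal metastasis, or a volume increase beyond a 30-50% threshold); the thresholds and variable names are illustrative assumptions taken from the text, not an official protocol implementation.

def should_convert_to_surgery(baseline_mm, current_mm,
                              baseline_vol_mm3, current_vol_mm3,
                              node_metastasis_confirmed,
                              diameter_trigger_mm=3.0,
                              volume_trigger_fraction=0.5):
    """Illustrative check of the AS-to-surgery triggers summarized in the text."""
    diameter_increase = current_mm - baseline_mm
    volume_increase = (current_vol_mm3 - baseline_vol_mm3) / baseline_vol_mm3
    reasons = []
    if diameter_increase >= diameter_trigger_mm:
        reasons.append("diameter increased by %.1f mm" % diameter_increase)
    if node_metastasis_confirmed:
        reasons.append("cytology-confirmed cervical lymph node metastasis")
    if volume_increase >= volume_trigger_fraction:
        reasons.append("volume increased by %.0f%%" % (100 * volume_increase))
    return len(reasons) > 0, reasons

# Example: a 7.0 mm nodule growing to 10.5 mm meets the diameter trigger alone.
convert, why = should_convert_to_surgery(7.0, 10.5, 180.0, 230.0, False)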
Tuttle et al. conducted an AS study of 291 patients with PTC < 1.5 cm and observed tumor growth in 3.8% of the patients, with no local or distant metastasis during AS; however, it is worth noting that the median follow-up time was relatively short (25 months; range 6-166 months) 89, so this finding alone cannot establish the safety of AS for this slow-growing tumor. The clinical intervention rate for disease progression within 10 years of AS for TC is 8%, mostly driven by tumor volume increase 90. Whether patients with PTC are willing to coexist with the tumor is not only a psychological challenge but also carries a risk of tumor progression. The ATA guidelines define tumor enlargement as a 20% increase in at least two nodule dimensions, with a minimum increase of 2 mm, or a volume increase of more than 50% 9. Sugitani and Ito et al. consider that, for low-risk PTMC patients, AS is a reliable alternative that avoids unnecessary surgery and surgical complications, while high-risk PTMC patients should undergo total thyroidectomy 10. According to the current Kuma protocol, PTC meeting any of the following criteria is considered high-risk: tumor diameter > 4 cm; tumor invasion of the trachea or esophagus; metastatic lymph nodes > 3 cm in diameter; or distant metastasis. In practice, however, it may be difficult to distinguish low-risk from high-risk PTMC. The NCCN suggests AS for low-risk PC patients with < 10 years of life expectancy, and either AS or surgical treatment for low-risk patients with > 10 years of life expectancy 91. A cohort study from Kuma Hospital in Japan showed that young patients (< 40 years) with PTMC are more likely to experience tumor growth 48; therefore, given the lack of evidence, current guidelines do not recommend AS for children and adolescents younger than 20 years 10. Many countries and organizations have now incorporated AS management strategies for DTC into their guidelines (Table 3). In conclusion, AS is a safe management option for PTMC and may serve as an alternative to surgical treatment strategies; but since biological markers or imaging findings that reliably predict PTMC progression are still lacking, more rigorous follow-up strategies and more appropriate indications for AS are needed.
The psychological burden of AS and the choice of treatment strategy
Terminology, a key factor in treatment decision-making
There are still obstacles to the implementation of AS. Jensen et al. found that social beliefs about cancer, unclear surveillance protocols, and a lack of supporting data are perceived barriers to AS implementation in PTC 92. Terms such as cancer or carcinoma pose a great psychological challenge to patients, causing anxiety and panic. The moment when doctors disclose the diagnosis and relevant information is crucial for patients' understanding of the severity of the disease, and the doctors' attitude toward treatment also affects the patient's mood and treatment decision-making 93. In 2016, the encapsulated follicular variant of PTC was reclassified as a noninvasive follicular thyroid neoplasm with papillary-like nuclear features 94, thereby avoiding sensitive terminology and reducing patients' anxiety. This reclassification is estimated to spare approximately 45,000 patients worldwide from surgical treatment every year.
In a randomized controlled trial, patients chose different treatment plans depending on the terminology used for the diagnosis: approximately 19.6% of patients chose total thyroidectomy when the term “PTC” was used, compared with only 10.5% and 10.9% when “papillary lesion” or “abnormal cells” were used, respectively 95. Therefore, the emotions associated with disease terminology play a crucial role in treatment decisions. Fear and anxiety often lead patients to prefer thyroidectomy over AS. Additionally, thyroidectomy may give patients, as well as surgeons, a greater sense of security 96, and over 33.4% of doctors have faced litigation over delayed diagnosis 97. Among cancer patients, a major source of litigation is delayed diagnosis, i.e., detection at an advanced stage of disease because clinicians did not schedule tests that would have detected the cancer earlier. In TC, delayed diagnosis is the main cause of medical malpractice litigation involving low-risk PTMC patients 98. However, clinicians may not readily accept these changes in non-surgical management and terminology until new and stronger evidence emerges 99.
Psychological burden of AS and IS
Most AS patients who switch to surgical treatment do so because of persistent anxiety rather than tumor progression. A further concern with AS is that it may miss high-risk pathological subtypes requiring radical surgery, which itself can create a psychological burden. The psychological burdens associated with AS and IS are summarized in Table 4. Patients with an indeterminate TN or PTC usually have a strong emotional response to their diagnosis, and their primary urge is to remove the tumor 96. Therefore, the new AS guidelines suggest addressing the fear associated with cancer in PTC patients. Another clinical study of 200 patients showed that approximately 75% of patients initially chose AS because of the fear of taking thyroid hormone after surgery; however, upon disease progression during follow-up, they preferred surgery 100. A study in the Netherlands showed that the health-related QoL (HRQoL) of both AS and IS groups deteriorated over time; however, the use of fluorodeoxyglucose positron emission tomography/CT (FDG-PET/CT) in management helped maintain a better HRQoL for one year 101. Although this may not be economical, it can reduce patients' anxiety. One study found that the anxiety scores of the AS group showed a downward trend, while those of the IS group remained high for a prolonged period 102, and anxiety in AS patients appears to decrease after a certain period of follow-up, such as 5 years 103. In contrast, Short Form-12 and TC-QoL scores reveal no difference in QoL between the AS and IS groups 104. Another study found that AS relies on better-resourced medical institutions and is associated with anxiety and depression scale scores 105. In 2018, an 8-month follow-up study of AS and IS groups in Korea showed that although the mental health status of the AS group was better than that of the surgical group (7.4 ± 1.3 vs 6.9 ± 1.6, p = 0.004), the AS group faced a greater fear of tumor progression or recurrence 106. A prospective QoL study of AS showed that PTMC patients receiving lobectomy had more health complaints than those under AS, mainly neuromuscular (p = 0.020), throat/mouth (p = 0.043), and scarring issues (p < 0.001) 104. However, there are some opposing views.
A State-Trait Anxiety Inventory study in Japan showed that the AS group had somewhat higher anxiety scores, although the difference was not statistically significant (95% CI, -0.03-1.1; p = 0.068) 107. A study in South Korea showed that after two years of AS, about 18% (101/561) of patients withdrew from AS; among these, QoL did not decline in patients who switched to surgery because of disease progression, whereas postoperative scores declined in patients who switched because of anxiety or other thyroid diseases 108. However, another study revealed that treatment of TC, especially PTMC, with IS did not improve patients' psychological distress or sleep disturbance compared with AS 109. Based on the above evidence, we believe that, to limit the psychological burden, it is important both to mitigate the decline in QoL caused by psychological factors and to avoid the constant anxiety caused by long-term follow-up with repeated ultrasound examinations. Thus, although the ATA does not recommend FNA for TN < 1 cm regardless of the ultrasound findings, most people may still opt for thyroidectomy for peace of mind 9, 90.
Comparison of the economic advantages of AS and IS
Healthcare systems worldwide carry a heavy financial burden amid the global economic slowdown. Countries are therefore looking for sustainable strategies for managing TC, and AS is an effective means of reducing the burden of medical expenses. Although healthcare systems differ between countries, the overall costs of AS and IS can be compared within the same system. A study of the costs of AS and IS at Kuma Hospital showed that, in the absence of delayed surgery, the total 10-year cost of IS was 4.7-5.6 times that of AS, and even when conversion from AS to surgery was taken into account, IS was 4.1 times more expensive than AS 110. Similarly, another study found that the total 10-year cost of IS was 4.1 times that of AS 82. However, these are only comparisons of total costs over 10 years and do not consider longer-term economic outcomes. Different strategies should be adopted for PTMC patients according to the medical costs and healthcare systems of different countries. A study based on a Markov decision-tree model showed that, for incidentally detected PTMC, non-surgical management was cost-saving for the first 16 years compared with early surgery; after 17 years, each patient spent an additional USD 682.54 but gained an extra 0.260 quality-adjusted life years 111. A similar Markov microsimulation analysis showed that, for small, well-differentiated PTCs, the 2015 ATA guidelines are highly cost-effective compared with the 2009 ATA guidelines, primarily because AS reduces the incidence of surgery-related adverse events 112. Another study of 349 PTMC patients under AS showed that the total cost of AS exceeded that of surgical treatment after 16 years 113, which may eventually pose an economic challenge for young AS patients. Yet another Markov decision-tree model showed that surgical intervention is cost-effective in patients aged 40-69 years, whereas AS is more cost-effective than lobectomy for those > 69 years of age, yielding 17.3 quality-adjusted life years 114. Beyond the comparison between IS and AS, the management of AS itself has been evaluated economically: for instance, a 12-month RCT found that close observation was more cost-effective than FNAC for patients with 1.0-2.0 cm TN 115.
Another study showed that, for 1.0 cm moderately suspicious TN, ultrasound monitoring reduced costs by USD 1,829 and added 0.016 quality-adjusted life years compared with FNA 116.
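The Markov decision-tree analyses cited above compare discounted costs and quality-adjusted life years (QALYs) accumulated under each strategy; the Python sketch below shows only the generic skeleton of such a comparison, with entirely hypothetical costs, utilities, and discount rate (the real models also include health-state transitions such as progression, conversion to surgery, recurrence, and death).

def discounted_totals(annual_cost, annual_utility, years, discount=0.03):
    """Accumulate discounted cost and QALYs for a simplified one-state cohort."""
    cost = qalys = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + discount) ** t
        cost += d * annual_cost
        qalys += d * annual_utility
    return cost, qalys

# Hypothetical inputs: AS carries a yearly surveillance cost, IS an up-front surgical cost.
as_cost, as_qalys = discounted_totals(annual_cost=300.0, annual_utility=0.95, years=20)
is_cost, is_qalys = discounted_totals(annual_cost=50.0, annual_utility=0.93, years=20)
is_cost += 6000.0  # one-off surgery cost (placeholder)

delta_cost, delta_qalys = is_cost - as_cost, is_qalys - as_qalys
# If one strategy costs more and yields fewer QALYs it is simply dominated; otherwise
# the incremental cost-effectiveness ratio is delta_cost / delta_qalys.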
We thank Bullet Edits Limited for the linguistic editing and proofreading of the manuscript.
Funding
This work was supported by the Applied Basic Research Program of Liaoning Province (2022020225-JH2/1013) and the Science and Technology Project of Shenyang City under Grant 21-173-9-31.
Author contributions
Qi Liu: Data curation and writing (original draft preparation). Mingyuan Song: Data analysis and tabulation. Hao Zhang: Supervision, validation, and editing.
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):1009-1020
oa_package/97/9e/PMC10788711.tar.gz
PMC10788712
0
Background
Kidney cancer (KC) ranks as the third most common malignant tumor of the urinary system, after bladder cancer and prostate cancer. More than 80% of KC cases are clear cell renal cell carcinoma, whose incidence is increasing year by year 1, 2. At present, the main treatment for renal cancer is surgery. Nevertheless, because typical symptoms are lacking in the early stage, renal cancer is often diagnosed at an intermediate or advanced stage, and about 30% of patients already have metastatic disease at diagnosis, thus missing the best opportunity for radical surgery 3, 4. In addition, renal cancer responds poorly to radiotherapy and chemotherapy. This presents a challenge in treating patients with advanced renal cancer, whose prognosis is poor and survival rate low 5-7. Currently, surgical treatment remains the only reliably curative option for renal cancer, given the lack of targeted drugs for this specific indication. Thus, it is important to elucidate the molecular mechanisms of KC so that it can be diagnosed and treated early. MiRNAs are associated with tumor cell replication, migration and invasion 8, 9. Recent research has revealed that a variety of miRNAs are abnormally expressed in KC and are implicated in its programmed cell death and spread 10-12. For instance, miR-135a is a small RNA molecule with the ability to suppress cell replication 13: it can induce programmed death of renal cancer cells and impede their replication 14, 15. However, the specific mechanism of its action remains to be determined. Therefore, further study of the involvement of miR-135a in KC has practical significance for the diagnosis and treatment of KC and may also enrich the etiological and molecular-pathological understanding of KC. Protein phosphatase 2A (PP2A) is the main Ser/Thr protein phosphatase in eukaryotes; it dephosphorylates proteins and regulates physiological processes including cell differentiation, metabolism, replication, programmed death and cell transformation 16, 17. Owing to the diversity of the PP2A B-subunit family, PP2A regulation is complex, with numerous substrates. Therefore, the role of PP2A holoenzymes containing different regulatory subunits in tumorigenesis and tumor development requires further research 18. Akt is a key protein kinase in the PI3K/Akt cellular signaling network 19. Extracellular signal-regulated kinase (ERK) is an important subfamily of the mitogen-activated protein kinases and exerts its activity only after phosphorylation 20. The AKT and ERK1/2 signaling pathways have been thoroughly researched and shown to have vital functions in the development and progression of different tumors; they are recognized as important regulators of cancer cell proliferation, survival, and invasion. Consequently, controlling these two signaling pathways could be crucial for hindering tumor growth. Previous studies have suggested possible links between miR-135a and the AKT and ERK1/2 signaling pathways: miR-135a can regulate the activity of target genes associated with these pathways and thereby affect them indirectly. Further investigation of how miR-135a interacts with these pathways could elucidate its mechanism of inhibiting KC growth. If miR-135a proves effective in modulating these signaling pathways to inhibit tumor growth, it could open new possibilities for KC therapy.
At present, there is no published research on the targeted regulation of PP2A by miR-135a in renal carcinoma, nor on the relationship between miR-135a and the Akt or ERK1/2 signaling pathways. Therefore, it is necessary to further explore the involvement of miR-135a in KC and its mechanisms of action through the PP2A, Akt and ERK1/2 molecular signaling networks.
Materials and Methods
Cell Lines and Cell Culture
ACHN and A498 cell lines were purchased from Guangzhou Jennio Biological Technology Co., Ltd. ACHN cells were cultured in high-glucose DMEM supplemented with 10% FBS and penicillin-streptomycin (PS, 100×); A498 cells were cultured in RPMI-1640 medium supplemented with 10% FBS and PS (100×).
MiRNA Transfection
The DNA fragments for miR-135a, control miRNA, and siRNAs were obtained from GeneCopoeia, Inc. and inserted into lentiviral expression plasmids. The human renal cancer cell lines ACHN and A498 were cultured in the appropriate media with 10% FBS for 24 hours before transfection. MiR-135a, miR-135a inhibitor, and control viruses were then added to the culture medium. After transfection, cells were cultured in media supplemented with 10% FBS and 0.5 μg/mL puromycin for selection.
Luciferase activity assay
The assay was conducted according to the manufacturer's protocol for the Dual-Luciferase Reporter 1000 Assay System. 293T cells were plated in 24-well cell culture clusters (Corning Incorporated; Corning, NY, USA). Upon reaching 70% confluence, the cells were co-transfected with hsa-miR-135a or control miRNA along with the 3'-UTR fragments of PP2A-Cα, PP2A-Cβ, or PP2A-B56-γ. After 48 hours, the cells were collected for measurement of firefly and Renilla luciferase activities; Renilla luciferase activity was used to normalize transfection efficiency.
QRT-PCR
Total RNA was extracted using the AG RNAex Pro RNA extraction kit (AG21102), and RNA purity and concentration were measured with a NanoDrop™ 2000 spectrophotometer. Complementary DNA (cDNA) was synthesized with the Evo M-MLV Mix Kit with gDNA Clean for qPCR (AG11728) and analyzed in a QuantStudio™ 5 real-time PCR instrument. Real-time quantitative PCR was conducted using the SYBR Green Premix Pro Taq HS qPCR Kit (Rox Plus, AG11718) according to the manufacturer's instructions, with U6 as the internal control for miR-135a; primers are listed in Table 1. Gene expression was calculated by the 2^-ΔΔCt method.
Western Blotting
Cells were lysed with RIPA buffer and total proteins were extracted. Western blotting was carried out using standard procedures. The proteins were separated by SDS-PAGE and transferred onto a PVDF membrane. The membranes were incubated with anti-PP2A C subunit, anti-PP2A-B56-γ, anti-PP2A-Cα, anti-PP2A-Cβ, anti-AKT, anti-p-AKT, anti-ERK1/2, anti-p-ERK1/2, or anti-β-tubulin antibodies, and then probed with a secondary antibody (1:10,000), with β-tubulin as the loading control.
Cell Replication (CCK-8)
Four separate cultures of cells were transfected with control miRNA, miR-135a, control miRNA inhibitor, or miR-135a inhibitor, and each culture was seeded in an individual 96-well cell culture cluster. The cultures were maintained for 5 days. To assess cell replication, the CCK-8 cell proliferation reagent was added to each well and incubated for an additional 1 hour, after which the absorbance at 450 nm was measured.
Colony Assay
For the colony assay, 1000 cells were plated in 60 mm dishes containing 3 mL of DMEM supplemented with 10% FBS. The dishes were incubated at 37°C in a humidified atmosphere with 5% CO2. ACHN cells were cultured for three weeks, while A498 cells were cultured for two weeks.
After the incubation period, the colonies were stained with CBB and counted. All assays were performed in triplicate.
In vitro Scratch Assay
Cells were plated in 12-well cell culture plates, and for the in vitro scratch assays the cell monolayers were scratched with a sterile pipette tip once the cells reached 100% confluence. Cell migration was observed for up to 24 h. Percent wound closure was determined in triplicate, with each replicate calculated from five randomly chosen fields.
Effect of miRNA-135a on tumor growth in nude mice (subcutaneous tumorigenesis assay)
The nude mice used in the experiment were purchased from Hunan Slack Jingda Laboratory Animal Co., Ltd. (China). In accordance with regulations, the breeding room and cages were disinfected and prepared in advance, and the mice were acclimatized to the environment for one week. The experiment comprised a control group and a miRNA-135a group: the miRNA-135a group received subcutaneous injections of A498 cancer cells transfected with miRNA-135a, while the control group received A498 cells transfected with control miRNA. The growth status of the mice was observed daily, the size of the subcutaneous tumors was recorded with an electronic caliper, body weight was recorded with an electronic scale, and survival status was also recorded. When the subcutaneous tumors reached 1.0-1.5 cm, the mice were euthanized by cervical dislocation, and the tumors were carefully dissected and measured in millimeters to two decimal places. Tumor volumes were then calculated and analyzed statistically using GraphPad Prism 9.0.
Statistical Analysis
All statistical analyses were conducted using GraphPad Prism 5.0 (GraphPad Software, Inc., USA), and results are presented as mean ± S.D. Statistical significance was set at P < 0.05, with highly significant differences indicated at P < 0.01 and P < 0.001.
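As a concrete illustration of the 2^-ΔΔCt calculation referred to above, the short sketch below computes relative miR-135a expression from raw Ct values; the Ct numbers are invented for illustration, with U6 as the internal reference as in the Methods.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """2^-ΔΔCt (Livak) relative quantification: ΔCt = Ct(target) - Ct(reference)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** (-(delta_ct_sample - delta_ct_control))

# Illustrative Ct values: miR-135a vs U6, overexpression sample vs control-miRNA sample.
fold_change = relative_expression(ct_target_sample=22.1, ct_ref_sample=18.0,
                                  ct_target_control=26.3, ct_ref_control=18.2)
# fold_change ≈ 16, i.e. roughly 16-fold higher miR-135a in the overexpressing cells.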
Results
Construction of ACHN and A498 renal cell lines with stable hyperexpression of miR-135a, reduced expression of miR-135a, and control sequences
The miR-135a plasmid contains a green fluorescent protein gene, and the miR-135a-suppressor plasmid contains a red fluorescent protein gene, so plasmid infection and expression in KC cells could be assessed by observing fluorescence under an inverted fluorescence microscope. Puromycin was used to eliminate cells not carrying the target gene, yielding ACHN and A498 KC cells with stable hyperexpression of miR-135a, reduced expression of miR-135a, or control sequences. Total RNA was extracted, and miR-135a expression was measured by real-time fluorescence quantitative PCR. The results showed that miR-135a expression in ACHN and A498 renal cancer cells carrying the miR-135a-overexpressing lentivirus was significantly higher than in the control group (ACHN P < 0.001, A498 P < 0.001; Fig. 1 A), and more than 95% of the KC cells expressed green fluorescent protein. Conversely, miR-135a expression decreased significantly in ACHN and A498 cells infected with the lentivirus carrying the miR-135a suppressor (Fig. 1 B), and more than 95% of these KC cells expressed red fluorescent protein. This indicated that renal cell lines with stable hyperexpression and reduced expression of miR-135a were successfully constructed.
Hyperexpression of miR-135a suppressed the replication of KC cells but had no obvious effect on their migration ability
The CCK-8 assay revealed that hyperexpression of miR-135a significantly suppressed the replication of the KC cell lines ACHN and A498 (ACHN P < 0.05, A498 P < 0.01; Fig. 2 A). Consistently, the colony formation assay showed that the number of colonies formed after miR-135a hyperexpression was significantly lower than in the control group, in line with the cell replication results (Fig. 2 B). The cell scratch assay showed no significant difference in the migration distance of ACHN and A498 cells hyperexpressing miR-135a after 24 hours compared with the control group (Fig. 2 C). These results suggest that miR-135a hyperexpression can hinder KC cell growth but does not significantly affect migration ability.
Reduced expression of miR-135a promoted the replication of KC cells but had no obvious effect on their migration ability
The CCK-8 assay showed that the replication ability of ACHN and A498 renal cancer cells was enhanced after knockdown of miR-135a (Fig. 3 A), and their colony-forming ability was likewise increased (Fig. 3 B). The cell scratch assay revealed that knockdown of miR-135a had no obvious impact on the migration ability of ACHN and A498 cells: after 24 hours, the migration distance of cells with reduced miR-135a expression did not differ from that of the control group (Fig. 3 C). This suggests that reduced expression of miR-135a promotes the growth of KC cells but has no impact on their migration ability.
MiR-135a regulated cell replication through PP2A-B56-γ, PP2A-Cα and PP2A-Cβ
TargetScan prediction of the downstream genes of miR-135a indicated that PP2A-B56-γ, PP2A-Cα and PP2A-Cβ might be target genes of miR-135a (Fig. 4 A).
The dual-luciferase assay showed that miR-135a bound the wild-type PP2A-B56-γ 3'-UTR in a targeted manner, reducing luciferase activity ( P < 0.001; Fig. 4 B). Furthermore, it specifically bound the wild-type PP2A-Cα 3'-UTR and PP2A-Cβ 3'-UTR, likewise reducing luciferase activity (PP2A-Cα P < 0.001, PP2A-Cβ P < 0.01; Fig. 4 B). In contrast, miR-135a had no effect on the 3'-UTRs of mutant PP2A-B56-γ, PP2A-Cα and PP2A-Cβ, whose luciferase activity remained essentially unchanged ( P > 0.05; Fig. 4 B). This indicated that PP2A-B56-γ, PP2A-Cα and PP2A-Cβ are direct targets of miR-135a. Western blotting showed (Fig. 4 C) that the expression of PP2A-B56-γ, PP2A-Cα and PP2A-Cβ was down-regulated in KC cells after hyperexpression of miR-135a, whereas the protein expression of PP2A-B56-γ, PP2A-Cα and PP2A-Cβ was up-regulated after knockdown of miR-135a. This suggests that hyperexpression of miR-135a may suppress the growth of KC by repressing its target genes PP2A-B56-γ, PP2A-Cα and PP2A-Cβ.
MiR-135a might suppress the replication of KC by affecting the Akt and ERK1/2 pathways
Our earlier experiments showed that miR-135a can suppress KC replication by down-regulating the PP2A family proteins. A review of the related literature, together with Western blotting, suggested that Akt and ERK1/2 may also be involved in the mechanism of miR-135a in KC. Western blotting showed (Fig. 5 ) that p-Akt and p-ERK1/2 were down-regulated in KC cells after hyperexpression of miR-135a, whereas total Akt and ERK1/2 showed no obvious change. Conversely, p-Akt and p-ERK1/2 were up-regulated in KC cells following knockdown of miR-135a, while total Akt and ERK1/2 remained unchanged. These findings suggest that miR-135a overexpression may hinder the replication of KC by suppressing the phosphorylation of Akt and ERK1/2.
Results of subcutaneous tumors in nude mice
The experiments above indicated that miR-135a may inhibit KC replication by affecting the PP2A, Akt and ERK1/2 pathways. To verify the inhibitory effect of high miR-135a expression on KC growth in vivo, nude mice were used for subcutaneous tumorigenesis assays testing the effect of miRNA-135a on tumor growth. After 15 days, the tumors in the miR-135a group of nude mice were significantly smaller than those in the control group, showing that overexpression of miRNA-135a can significantly inhibit the growth of KC in vivo (Fig. 6 ).
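The reduced luciferase activity reported above for the wild-type 3'-UTR constructs (Fig. 4 B) reflects firefly readings normalized to the co-transfected Renilla control, as described in the Methods; a minimal sketch of that normalization, using invented readings, is shown below.

def normalized_relative_activity(firefly, renilla, firefly_ctrl, renilla_ctrl):
    """Firefly/Renilla ratio of the miR-135a well relative to the control-miRNA well.
    Values below 1 indicate repression of the reporter; all readings are invented."""
    return (firefly / renilla) / (firefly_ctrl / renilla_ctrl)

# Example: miR-135a + wild-type PP2A-B56-γ 3'-UTR versus control miRNA.
rel = normalized_relative_activity(firefly=12000, renilla=40000,
                                   firefly_ctrl=30000, renilla_ctrl=42000)
# rel ≈ 0.42, i.e. about 58% repression of the wild-type reporter.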
Discussion
MiRNAs participate in tumor cell replication, migration, invasion and programmed death by base-pairing with complementary sequences in messenger RNAs, thereby degrading the target mRNA or suppressing its translation 21, 22. MiRNAs are also involved in key processes contributing to tumor initiation and progression 23, 24. Although reports on the links between miRNAs and various tumors are constantly emerging, research on renal cancer remains relatively scarce. The present study focuses on the link between miR-135a and KC. Several studies have examined the expression of miR-135a in KC and found it to be lower than in normal tissues 14. Furthermore, miR-135a shows varying expression and functions among different tumor types, indicating its significant role 25-28. In this study, ACHN and A498 renal cell lines with stable hyperexpression or low expression of miR-135a were constructed by lentiviral infection. Further experiments confirmed that hyperexpression of miR-135a decreased the replication ability of the renal cell lines, whereas knockdown of miR-135a markedly increased it. In the subsequent subcutaneous tumor formation experiments in nude mice, tumors in the miRNA-135a group were significantly smaller than those in the control group, corroborating these observations. Additionally, the study indicated an inverse association between miR-135a expression and the PP2A genes, with a certain correlation to tumor initiation and progression. The present study indicated that miR-135a can suppress KC replication by modulating PP2A genes. This is consistent with the conclusion of Yamada et al. 14 in Japan that miR-135a shows reduced expression in KC (Caki-2 and A498) and can suppress KC replication by modulating the expression of the oncogene c-MYC and the cell cycle. In addition, neither hyperexpression nor reduced expression of miR-135a affected the migration ability of ACHN and A498 cells ( P > 0.05); the underlying mechanism needs to be confirmed by further experiments. Previous studies have shown that miR-135a regulates cancer cell replication and programmed death and also contributes to tumor invasion and spread. In 2020, Deng et al. 29 found that miR-135a can inhibit NSCLC cell replication, invasion, and spread through down-regulation of RAB1B and the RAS pathway, effectively suppressing the progression and dissemination of NSCLC. Studies have also shown that miR-135a can suppress the migration of gastric tumor cells by modulating the TRAF5-mediated NF-κB pathway 30. These discrepancies may reflect tissue-specific effects. In the present work, PP2A-B56-γ, PP2A-Cα and PP2A-Cβ were predicted to be targeted regulatory genes of miR-135a by the TargetScan prediction software, and the dual-luciferase assay verified that miR-135a can act on the mRNA 3'-UTRs of PP2A-B56-γ, PP2A-Cα and PP2A-Cβ. Western blot analysis indicated that up-regulation of miR-135a suppresses the replication and development of KC by reducing PP2A gene expression; conversely, analysis of KC cells after down-regulation of miR-135a revealed an increase in the PP2A-B56-γ, PP2A-Cα, and PP2A-Cβ proteins. These results suggest that miR-135a regulates the PP2A family of target genes through post-transcriptional suppression of translation. The present study thus newly reveals that PP2A can be modulated by miR-135a and plays an oncogene-like role in renal carcinoma.
However, bioinformatic target prediction suggests that PP2A may be regulated by multiple miRNAs other than miR-135a in renal cancer. For example, miR-183 also regulates renal carcinoma 31 and promotes cell growth, invasion, and spread by down-regulating PP2A, whereas down-regulation of PP2A had an anti-cancer effect in the present study. This discrepancy may arise from the complexity of the PP2A B-subunit family, which gives rise to a diverse range of PP2A complexes and substrates and to different, even contradictory, roles in the regulation of various genes. Therefore, the role of PP2A holoenzymes composed of different regulatory subunits in tumor initiation and progression needs further research and confirmation. In addition, the expression of phosphorylated Akt and ERK1/2 in KC cells decreased markedly after hyperexpression of miR-135a, while phosphorylated Akt and ERK1/2 were significantly up-regulated after knockdown of miR-135a. This demonstrates that overexpression of miR-135a may inhibit the replication of KC cells by suppressing the phosphorylation of Akt and ERK1/2. However, further research is needed to identify the specific upstream activating proteins or downstream signaling proteins targeted by miR-135a. The Akt and ERK1/2 signaling pathways may thus represent one of the mechanisms through which miR-135a inhibits tumor cell replication. Further research into the mechanism by which miR-135a suppresses KC replication is crucial for understanding the self-regulated replication of KC and for identifying potential therapeutic targets.
Conclusion
Taken together, miR-135a can inhibit KC replication and tumor progression, and reducing endogenous miR-135a expression can increase KC replication. Moreover, this study showed that miR-135a can suppress the growth of kidney tumor cells through down-regulation of the target genes PP2A-B56-γ, PP2A-Cα, and PP2A-Cβ. Additionally, miR-135a affects the AKT and ERK1/2 signaling pathways.
Competing Interests: The authors have declared that no competing interest exists.
Background: Kidney cancer is a frequently occurring malignant tumor of the urinary system, with rising morbidity and mortality rates in recent years. Developing new biomarkers and therapeutic targets is essential to improve the prognosis of patients affected by kidney cancer. In recent years, the role of miRNAs in tumorigenesis and tumor development has received growing attention. MiRNAs constitute a group of small non-coding RNA molecules that regulate gene expression and affect various biological processes, including cell proliferation, differentiation, and apoptosis. Among them, miR-135a plays a pivotal role in several cancers. Nevertheless, the precise mechanisms and functions of miR-135a in renal cancer remain incompletely understood. Therefore, this study aimed to analyze the effects of miR-135a on renal cancer cell replication and migration and its possible mechanisms, and to provide new strategies for the diagnosis and treatment of renal cancer. Methods: Renal cell lines (ACHN, A498) with stable hyperexpression or reduced expression of miR-135a were constructed by lentiviral packaging. Changes in the replication, colony formation and migration ability of miR-135a-overexpressing and miR-135a-knockdown ACHN and A498 cells were assessed. The possible mechanisms by which miR-135a affects kidney cancer replication were analyzed by target gene prediction, dual-luciferase assay, Western blotting and a subcutaneous tumorigenicity assay in nude mice. Results: Hyperexpression of miR-135a inhibited kidney cancer cell replication, whereas miR-135a knockdown enhanced replication; neither hyperexpression nor knockdown of miR-135a affected the migration ability of kidney cancer cells. The protein expression of PP2A-B56-γ, PP2A-Cα and PP2A-Cβ in the renal cell lines decreased after hyperexpression of miR-135a and increased after knockdown of miR-135a. In addition, p-Akt and p-ERK1/2 protein levels in kidney cancer cells were down-regulated after hyperexpression of miR-135a and up-regulated after knockdown of miR-135a. In the subcutaneous tumor formation experiments in nude mice, tumors in the miR-135a group were significantly smaller than in the control group. Conclusion: MiR-135a suppresses the replication of kidney cancer by modulating PP2A and the AKT and ERK1/2 signaling pathways.
Funding
This work was supported by the National Natural Science Foundation of China (No. 82300788), the National Natural Science Foundation of China (No. 81402118) and the Joint Project between the Provincial Natural Science Foundation and Science and Technology of Hunan (No. 2022JJ70142).
Ethics declarations
All animal experiments complied with the ARRIVE guidelines and were performed in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals. Approval of the animal experiments was obtained from the Animal Research Committee of The Second Xiangya Hospital of Central South University.
Data availability statement
The datasets used and/or analysed during the current study are available from the corresponding authors on reasonable request.
Author contributions
Kangning Wang: Project development, Data Collection, Manuscript writing. Hege Chen: Data Collection, Data analysis. Xiang Chen: Data collection. Zesong Fang: Data collection. Enhua Xiao: Data analysis. Qiuling Liao: Project development, Data collection, Data analysis, Manuscript editing.
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):999-1008
oa_package/4c/09/PMC10788712.tar.gz
PMC10788713
0
Introduction
In pulmonary hypertension (PAH), increased pulmonary vascular resistance (PVR) is observed, which usually leads to right ventricular heart failure. The main symptoms arising from increased PVR are dyspnea, fatigue, orthopnea, dizziness, fainting, non-productive cough, peripheral edema, angina pectoris and, finally, in severe cases, leg swelling. 1 Pulmonary hypertension is progressive and is considered a fatal condition in its final stages. Decreased exercise tolerance and heart failure are also observed. Symptoms usually develop over years, and diagnosis is therefore delayed; however, some patients develop hemoptysis at an early stage or present with presyncope or even syncope. In venous hypertension, shortness of breath occurs while lying flat, whereas this symptom is not observed in pulmonary arterial hypertension. There are 5 types of PAH, and several tests therefore have to be performed to distinguish pulmonary arterial hypertension from venous hypertension. These include blood tests to exclude HIV infection, autoimmune diseases and liver disease; pulmonary function tests; arterial blood gas measurements; electrocardiography; ventilation-perfusion (V/Q) scanning to exclude chronic thromboembolic pulmonary hypertension; and CT angiography of the thorax. Lung biopsy is performed only when an underlying interstitial lung disease is suspected. An easy method of evaluating the clinical improvement of these patients is the six-minute walk test (6MWT); improvement in 6MWT values has previously been shown to correlate with a survival benefit. Furthermore, blood BNP levels are considered a marker of disease stability or progression in these patients. 2 PAH can be estimated in everyday clinical practice with echocardiography; however, the gold standard is pressure measurement with a Swan-Ganz catheter, and right-sided cardiac catheterization is required for the diagnosis of pulmonary arterial hypertension. 3 Normal pulmonary arterial pressure values are between 8-20 mm Hg (1066-2666 Pa) at rest, and pulmonary hypertension is diagnosed when the mean pulmonary artery pressure exceeds 25 mm Hg at rest. In order to administer proper treatment, we have to evaluate whether the PAH is arterial, venous, hypoxic, miscellaneous or thromboembolic. For patients with left heart failure or hypoxemic lung diseases (group II or III pulmonary hypertension), endothelin antagonists, phosphodiesterase inhibitors and prostanoids should not be administered. 4 First-line treatment for pulmonary arterial hypertension is considered to be digoxin, diuretics, oxygen therapy and oral anticoagulants. Moreover, high-dose calcium channel blockers can be administered to patients with idiopathic pulmonary arterial hypertension. 5 , 6 Several novel drugs are being investigated for PAH, and the main method of evaluating their effectiveness is still the 6MWT. 7 Tyrosine kinase inhibitors have been evaluated as a treatment for pulmonary hypertension, 8 - 10 and imatinib has recently been investigated against pulmonary hypertension. 11 - 15 Paclitaxel has recently been evaluated as a remodeling agent for PAH, 16 and sotatercept has recently been evaluated in a clinical trial with favorable results (the PULSAR study). 17 We investigated whether the drugs paclitaxel, sotatercept and iloprost could be administered as aerosols with jet and ultrasound nebulizers.
Moreover, we evaluated the optimal combination of residual cup design, residual cup loading and nebulizer in order to produce droplets of ≤ 5 μm.
Materials and methods
Drugs
The drugs used were Ventavis® (iloprost, 15 mg/ml, Bayer), Paxene® (paclitaxel, 150 mg/25 ml, Norton Healthcare, Ltd) and Sotatercept® (sotatercept, 0.3 mg/ml, Merck & Co., Inc).
Aerosol Production Systems: Jet-Nebulizers and residual cups
Three jet-nebulizers were chosen for the experiment: a) a Philips Respironics InnoSpire Essence compressor nebulizer system with SideStream technology (compressor max. pressure 317 kPa (46 psi); Respironics New Jersey, Inc, Parsippany, NJ 07054, USA), b) a Maxineb® (6 L/min and 35 psi; Hof, Germany) and c) an Invacare® (4-8 liters/minute and 36 psi) (Figure 1). We used 7 different residual cups: four with a capacity of no more than 6 ml (C, F, B and G) and three with a capacity of no more than 10 ml (A, D and E) (Figures 2, 3). The large residual cups were used with loadings of 2-8 ml; the residual cup loadings were 2, 4, 6 and 8 ml (8 ml for the large cups only).
Ultrasound Nebulizers
Three ultrasound nebulizers were purchased for the experiment. The first was an Omron® NE-U07 (Tokyo, Japan), which is compact, weighs less than 350 g, includes a 10 ml medication cup, and generates uniform micrometre-sized vapor particles. The second was a Contec NE-M01 portable handheld mesh nebulizer (CONTEC Medical Systems Co., Ltd., UK). The third was a portable EASYneb® II (FLAEMNUOVA, Martino, Italy). The loadings were 2 and 4 ml, based on the residual cup capacity of each of the three ultrasound nebulizers (Figure 4).
Droplet Measurement
We used a Malvern Mastersizer 2000 apparatus (Malvern Instruments Ltd, Malvern, Worcestershire, UK) equipped with a Scirocco module (Malvern Instruments Ltd) to determine the size distribution of the droplets and their mean diameter ( d 32 ). A refractive index of 1.33 was used for the sprayed droplets. Three experiments were performed for each combination. 18 - 22
Statistical analysis
Jet technology: Four factors were considered to affect droplet size: two drugs (iloprost and sotatercept), 3 nebulizers (INVACARE, RESPIRONICS, MAXINEB), 7 residual cup designs (A to G) and 3 loadings (2, 4, 6 ml). A four-factor ANOVA was performed with a 0.05 probability reference level, and pair-wise statistically significant differences between means were examined using the 95% confidence intervals. A further ANOVA was performed for the residual cups (A, D, E) filled with the 8 ml dose, using the same drugs and nebulizers. Ultrasound technology: Iloprost and sotatercept, together with the nebulizers (EASYNEB, CONTEC, OMRON), were investigated at 2 dose levels (2, 4 ml) for their impact on particle size formation.
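For clarity, the mean diameter d32 reported by the Mastersizer is the Sauter (volume-to-surface) mean of the droplet size distribution, and the four-factor analysis described above corresponds to a standard factorial ANOVA; the Python sketch below uses pandas/statsmodels with a placeholder data set whose factor levels and d32 values are invented (the real design also included the third nebulizer, all seven cups, and triplicate runs).

import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def sauter_mean_diameter(diameters, counts):
    """d32 = sum(n_i * d_i^3) / sum(n_i * d_i^2) for a binned droplet distribution."""
    num = sum(n * d ** 3 for d, n in zip(diameters, counts))
    den = sum(n * d ** 2 for d, n in zip(diameters, counts))
    return num / den
# e.g. sauter_mean_diameter([1, 2, 4], [100, 50, 10]) ≈ 2.5 (same units as the diameters)

# Placeholder factorial design with synthetic d32 values (μm).
rng = np.random.default_rng(0)
rows = []
for drug, neb, cup, load in itertools.product(
        ["iloprost", "sotatercept"], ["MAXINEB", "INVACARE"], ["C", "G", "A"], [2, 4]):
    d32 = (1.3 + (0.9 if drug == "sotatercept" else 0.0)
               + (0.4 if cup == "A" else 0.0)
               + (0.1 if load == 4 else 0.0)
               + rng.normal(0.0, 0.05))  # synthetic measurement scatter
    rows.append({"drug": drug, "nebulizer": neb, "cup": cup, "loading_ml": load, "d32": d32})
df = pd.DataFrame(rows)

# Main-effects ANOVA on mean droplet size, mirroring the four-factor design described above.
model = smf.ols("d32 ~ C(drug) + C(nebulizer) + C(cup) + C(loading_ml)", data=df).fit()
print(anova_lm(model, typ=2))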
Mice
One hundred and twenty BALB/c mice, 9 weeks old, were purchased from the experimental laboratory of ``Theiageneio`` Anticancer Hospital and were divided into 3 groups. The institute holds the following authorizations for the production of and experimentation on mice: EL 25 BIO 011 and EL 25 BIO 013. The mice were housed individually (1 per cage) in a temperature-controlled room on a 12-hour light-dark cycle and were allowed free access to food and water. The Lewis lung carcinoma cell line was obtained from ATCC (CRL-1642™). The cells were routinely cultured in 25-cm² tissue culture flasks containing RPMI (ATCC, 30-2002) supplemented with 10% fetal bovine serum (Biochrom) according to the supplier's instructions. The cell line was incubated at 37 °C in 5% CO2; its doubling time was 21 hours. 33 At confluence, cells were harvested with 0.25% trypsin and re-suspended at 1.5×10⁶ cells in 0.15 ml PBS (phosphate-buffered saline, Dulbecco, Biochrom) for injection into the mice. The back of each mouse was inoculated subcutaneously (27-gauge needle, 1.5×10⁶ cells) and the tumor was grown there. The tumor volume (mm³) was measured once weekly from bidimensional caliper diameters using the equation V = 1/2 ab², where a represents the length and b the width. When the tumor volume reached ~100 mm³, the animals were randomly divided into six groups of 20; the drug groups were a) paclitaxel, b) sotatercept and c) iloprost.
Aerosol administration
Aerosol was administered using the cage shown in Figure 5, which was specifically designed for this study with a special inlet that connects to the nebulizer reservoir. For every drug group, after the initial experiments described above, we chose the optimum combination producing the smallest droplets; this information is provided in the Results section below.
Nebuliser results
Jet technology: The main factors affecting droplet formation were the drug and the residual cup design (Table 1, p-values <0.001). Iloprost gave a smaller mean droplet size, down to 1.37 μm, compared with sotatercept (2.23 μm). Residual cups C and G produced the lowest droplet sizes (1.32 and 1.37 μm, respectively); the remaining residual cups produced droplets of similar mean size to one another, but with higher means. With the 6 ml loading the mean droplet size was higher than with the smaller loadings (p=0.048). Sotatercept produced larger mean droplets (2.57 μm) when combined with the MAXINEB nebulizer (p=0.039). With the 8 ml loading the mean droplet size did not differ from the 6 ml loading, and this loading was therefore dropped from further investigation.
Ultrasound technology: The drug, residual cup loading and mouthpiece did not exert any statistically significant effect on particle size (Table 2; p-values 0.020, 0.036 and 0.043, respectively). Iloprost again produced a smaller mean droplet size than sotatercept (1.92 vs 3.11, Table 3). The facemask produced a slightly lower mean droplet size than the cone inlet (2.12 vs 2.91), as did the 2 ml dose versus the 4 ml dose (2.08 vs 2.95). The cone inlet produced small mean droplets comparable to the facemask (2.10 and 2.05, cone inlet/facemask; see also Table 2, p=0.038). Iloprost produced a smaller mean droplet size than sotatercept with both jet and ultrasound nebulisers (1.37 vs 2.23 and 1.92 vs 3.11, respectively), and an even smaller mean droplet size with the jet devices (1.37 vs 1.92). Residual cup designs C and G contributed most efficiently to the production of a small mean droplet size, uniquely and equally for both drugs; in particular, sotatercept produced smaller droplets with residual cup design C (1.37 instead of 2.23). Moreover, the 2 ml loading produced the smallest mean droplet size with both the facemask and the cone inlet, making the facemask with a low 2 ml residual cup loading the best combination (2.08 and 2.12, respectively). We could not assess the results for paclitaxel, because it formed a colloid-like substance that blocked the Mastersizer; the lenses had to be cleaned repeatedly for the equipment to work again. The colloid substance formed because of the drug's properties.
Mice results
All animals were killed on the 20th day after the initiation of administration.
The mean tumor volume values were recorded throughout the experiment; upon death, or on the last day of the experiment, the final measurement was included in our data for the calculation of the mean tumor volume. The mean volume measurements for each group were as follows (mm³): a) 2132.4, b) 2361.2, c) 346.32. The following technique was used to measure the drug concentrations in the lung tissue of the mice. In the first step, the sample mass to be digested can be up to ca. 1.000 g; however, for some samples only a lower mass (e.g. 0.1500 g) was available and was digested. The procedure involved weighing the samples in Teflon® (DuPont, DE, USA) crucibles, addition of 6 mL of concentrated HNO₃, and heating in a steel autoclave (Berghof, BTR 941, Eningen, Germany; six-position aluminum block). The high-pressure conditions in the closed vessels assist the decomposition of the sample tissues, which is completed in less than 2 h at 130 °C. The obtained solution was diluted to 25 mL with double de-ionized water. The analytical instrument used was an ICP-AES Optima 3100 XL (Perkin-Elmer, MA, USA) operated in axial-viewing mode and equipped with a segmented-array charge-coupled device (SCD) detector. The drugs were detected in the lungs of all mice, providing proof of concept. However, as anticipated, emphysema damage was observed, most markedly in the paclitaxel group, less in the sotatercept group and even less in the iloprost group (Figure 6). In any case, the formulations of the drugs administered were not designed for delivery to the lung as an aerosol.
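The tumor volumes reported above come from bidimensional caliper measurements via V = 1/2 ab². A minimal worked example follows; the length/width pairs and final per-animal volumes below are hypothetical, not measurements from this study.

```python
# Tumor volume from caliper measurements, V = (a * b^2) / 2,
# with a = length (mm) and b = width (mm). Values are illustrative only.
def tumor_volume(length_mm: float, width_mm: float) -> float:
    return 0.5 * length_mm * width_mm ** 2

weekly = [(6.0, 5.5), (9.0, 8.0), (14.0, 12.0)]       # hypothetical (a, b) pairs
volumes = [tumor_volume(a, b) for a, b in weekly]
print(volumes)                                        # [90.75, 288.0, 1008.0] mm^3

# Group means are then taken over the final per-animal measurements.
final_per_animal = [1850.0, 2300.0, 2247.2]           # hypothetical final volumes
print(sum(final_per_animal) / len(final_per_animal))  # 2132.4 (cf. group a above)
```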
Discussion
The respiratory system has several defense mechanisms, such as the thick mucus layer, the beating cilia and, finally, the macrophages, all of which interact with deposited aerosol droplets. 23 An aerosolized drug therefore has to pass these barriers efficiently. Furthermore, the aerosol droplets must have a mass median aerodynamic diameter of ≤5 μm. Because of the high humidity of the respiratory system (>90%), aerosol droplets tend to increase in size by almost 50% during their passage to the lower respiratory tract. 23 Inhaled drug administration has been observed to be equally effective in many diseases: the local effect allows a lower drug dosage and therefore fewer adverse effects, as in the case of chronic obstructive pulmonary disease (COPD); inhaled insulin is another example. 24-29 Inhaled antibiotics and inhaled inhibitors for pulmonary hypertension are on the market. 22, 30 There are still safety concerns for the lung parenchyma that are being investigated, as in the case of the tyrosine kinase inhibitors (TKIs). 31, 32 TKIs have been used for a decade to target non-small cell lung cancer harboring epidermal growth factor receptor mutations. 33 Recently, TKIs were shown to be potent acute pulmonary vasodilators. 34 Sorafenib, a multikinase inhibitor, has been investigated as an aerosol against the vascular remodeling of pulmonary arterial hypertension. 35, 36 Imatinib, another TKI, was successfully administered against pulmonary hypertension in a patient with chronic eosinophilic leukemia. 37 In another study, imatinib was used directly against pulmonary arterial hypertension. 14, 38, 39 A major drawback of inhaled drugs is that, when a respiratory tract infection occurs, the pharmacokinetics of the drugs change. 24 The factors that most influence aerosol droplet production are: a) jet-nebuliser flow rate 40, b) viscosity 40, c) tapping of the residual cup during nebulisation 41, 42, d) chemical formulation 43, 44, e) residual cup loading 45, f) residual cup filling at the initiation of nebulisation 41, g) charge of the drug molecules 47, h) design of the residual cup 46, i) concentration of the drug solution and j) surface tension. Moreover, the salt concentration within the chemical structure of the drug formulation is responsible for the absorption of water from the environment. Platelet-derived growth factor (PDGF) inhibitors have also been used against PAH, 48 and vascular endothelial growth factor inhibitors have been used successfully against PAH. 49, 50 It has been proposed that inhibiting the PDGF pathway is more efficient against PH, since fibrinogenesis is blocked simultaneously. 51 Rho-kinase (ROCK) inhibitors have also been studied, 52, 53 whereas dasatinib has been reported to induce PAH. 54 A new Syk kinase inhibitor is being developed for inhalation by Pfizer and is being investigated in a Phase I study. 55 Rapamycin has been proposed as an antiproliferative agent for smooth muscle cells, implying that it could be used against PAH. 56 In our study, iloprost produced smaller droplets than sotatercept; however, both drugs can be administered as aerosols (1.37 vs 2.23 and 1.92 vs 3.11 for jet and ultrasound devices, respectively). Jet devices produced smaller droplets for both drugs (1.37 vs 1.92). Cup designs C and G produced the smallest aerosol droplets for both drugs, and the residual cup C design produced a notably small aerosol droplet size for sotatercept compared with the other residual cup designs (1.37 instead of 2.23).
Finally, at a 2 ml residual cup filling, the facemask and cone mouthpieces performed equally well (2.08 and 2.12, respectively). We concluded that paclitaxel cannot produce small droplets; it also remains very greasy and is possibly dangerous for the alveoli.
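Since the Discussion notes that droplets can grow by roughly 50% in the humid airways, a quick check, using the mean sizes measured here and taking the ~50% growth figure as an assumption, suggests the iloprost and sotatercept aerosols would remain at or below the 5 μm target:

```python
# Rough check of droplet size after ~50% hygroscopic growth in the airways.
# The 1.5 growth factor is the cited approximation, not a measured value.
GROWTH_FACTOR = 1.5
measured_means_um = {
    "iloprost (jet)": 1.37, "sotatercept (jet)": 2.23,
    "iloprost (ultrasound)": 1.92, "sotatercept (ultrasound)": 3.11,
}
for label, d in measured_means_um.items():
    grown = d * GROWTH_FACTOR
    print(f"{label}: {d:.2f} um -> ~{grown:.2f} um "
          f"({'within' if grown <= 5 else 'exceeds'} the 5 um target)")
```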
Competing Interests: The authors have declared that no competing interest exists. Background: Pulmonary hypertension is a common finding in several diseases, and its consequences are severe for several organs. It is usually under-diagnosed, and the main symptom observed is dyspnea with or without exercise. Several treatment modalities are currently available, administered orally, by inhalation, intravenously or subcutaneously; in advanced disease, heart or lung transplantation is considered. The objective of the study was to investigate the optimum method of aerosol production for the drugs iloprost, paclitaxel and the novel sotatercept. Materials and Methods: We used the drugs iloprost, paclitaxel and the novel sotatercept in an experimental nebulization setting. We performed nebulization experiments with 3 jet nebulizers and 3 ultrasound nebulizers, with different combinations of residual cup designs and residual cup loadings, in order to identify which combination produces droplets of less than 5 μm mass median aerodynamic diameter. Results: Paclitaxel could not produce small droplets; it also remains very greasy and is possibly dangerous for the alveoli. Iloprost produced a smaller droplet size than sotatercept with both inhalation technologies (1.37 vs 2.23 and 1.92 vs 3.11 for jet and ultrasound, respectively). Moreover, residual cup designs C and G created the smallest droplet size for both iloprost and sotatercept. There was no difference in droplet formation between the facemask and cone mouthpieces. Discussion: Iloprost and sotatercept can be administered as aerosols with either type of nebulisation system, and both are efficient when the residual cups are loaded with small doses of the drug (2.08 and 2.12, respectively).
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):871-879
oa_package/1f/6a/PMC10788713.tar.gz
PMC10788714
0
Introduction Cervical cancer has been identified as the fourth most common cancer and the fourth leading cause of cancer-related deaths among women worldwide, with approximately 600,000 new cases and 340,000 deaths reported annually 1 . The pathogenesis of cervical cancer is a complicated biological process, which involves multi-stage, long-term progression, and multi-factor interactions. Cervical cancer typically originates from healthy cervical tissue and progresses through cervical intraepithelial neoplasia (CIN 1/2/3) to invasive cervical cancer 2 , 3 . Although the etiology of cervical cancer remains elusive, high-risk human papillomavirus (HR-HPV) is a significant risk factor, with subtypes 16 and 18 being the most prevalent. Despite the widespread occurrence of human papillomavirus (HPV) infections, more than 90% of women can clear the HPV infection within three years through their immune system after infection. Only 10% of patients may experience persistent HPV infection, and ultimately, less than 1% of those with persistent HPV infection will develop cervical cancer 4 . The reasons for carcinogenesis are possibly related to environmental factors, including dietary patterns and lifestyle choices. Consequently, exploring the factors that lead to persistent HPV infection and ultimately result in the development of cervical cancer has been a prominent research focus in recent years. The global prevalence of vitamin D deficiency has emerged as a significant health concern, which has raised concerns regarding its potential implications for persistent HPV infection and cervical cancer occurrence. The prevalence of vitamin D (VD) deficiency ranges from 6.9% to 81.8% in European nations and from 2.0% to 87.5% in Asian nations. In more than half the countries, VD deficiency is present in more than 50% of adult individuals 5 . The presence of vitamin D receptors (VDR) and VD-activating enzymes in immune cells, such as monocytes, macrophages, dendritic cells, and lymphocytes, suggests that VD may act as an immunomodulator by binding VDR 6 . Previous research indicated that women with compromised immunity were at elevated risk of harboring persistent HPV infection, which may subsequently progress to CIN and ultimately develop into cervical cancer 2 , 3 . It is well known that VD can facilitate bone formation and calcification by stimulating the absorption of calcium and phosphorus by intestinal mucosal cells 7 , 8 . Notably, there is a growing body of evidence suggesting that VD and its metabolites may exert a significant role in the prevention or treatment of gynecological cancers, representing a novel and distinct function of VD that diverges from its established function in the regulation of calcium and bone metabolism 9 . However, the precise anticancer mechanism of VD in cervical cancer is still unclear and requires further exploration. This study was proposed to elaborate on previous research regarding VD and VDR and cervical cancer, and to discuss the function and the underlying mechanisms of VD and VDR in cervical cancer.
Conclusions
In summary, VD and VDR emerge as potentially pivotal factors in the occurrence and progression of cervical cancer, possibly reducing disease risk. Although epidemiological studies have established associations between VD, VDR and cervical cancer susceptibility, the empirical support for their preventive efficacy still rests on a small number of clinical trials. Existing studies underscore that the inhibitory effect of VD on cervical cancer may be mediated through various pathways and factors, including but not limited to the EAG potassium channel, HCCR-1, estrogen and its receptor, p53, pRb, TNF-α, the PI3K/Akt pathway, and the Wnt/β-catenin pathway. However, the extant literature in the realm of cervical cancer remains limited, with a conspicuous dearth of investigations exploring the intricate interplay among diverse molecular pathways and entities. In addition, while the association between VDR gene polymorphisms and cervical cancer has been elucidated to some extent, there remains an empirical void concerning the mechanistic underpinnings of these polymorphic loci in the context of HPV infection and VD. Moreover, a conspicuously absent avenue of research pertains to the relationship between genetic polymorphisms, dietary intake of VD and calcium, and their collective influence on cervical cancer. VD, as a vitamin, has a certain inhibitory effect on cervical cancer, thereby eliciting interest in the potential contributions of other vitamins in the context of this malignancy. This review briefly alludes to the possible effects of other vitamins on cervical cancer and explores potential synergistic mechanisms between VD and its vitamin counterparts. While VD's traditional role lies in the regulation of calcium metabolism, the intricate interplay between VD, Ca, and other vitamins has received limited attention in the current literature. Many of the proposed mechanistic processes still require confirmation in future studies. In conclusion, VD presents a promising avenue for novel therapeutic approaches to cervical cancer. Further research endeavors should seek to elucidate the potential synergistic benefits of combining VD with other anticancer medications, thereby advancing our understanding of effective treatment strategies in this context.
*These authors have contributed equally to this work and share correspondence and last authorship. Competing Interests: The authors have declared that no competing interest exists. Several studies have investigated the relationship between vitamin D (VD), its receptor (VDR) and the risk of cervical cancer. However, the underlying mechanisms that underpin these associations remain incompletely understood. In this review, we analyzed the impacts of VD and VDR on cervical cancer and the related mechanisms, and discussed the effects of VD, calcium, and other vitamins on cervical cancer. Our literature research found that VD, VDR and their related signaling pathways play indispensable roles in the occurrence and progression of cervical cancer. Epidemiological studies have established associations between VD, VDR, and cervical cancer susceptibility. Current studies have shown that the inhibitory effect of VD and VDR on cervical cancer may be attributed to a variety of molecules and pathways, such as the EAG potassium channel, HCCR-1, estrogen and its receptor, p53, pRb, TNF-α, the PI3K/Akt pathway, and the Wnt/β-catenin pathway. This review also briefly discusses the association between VDR gene polymorphisms and cervical cancer, although a comprehensive elucidation of this relationship remains an ongoing research endeavor. Additionally, the potential ramifications of VD, calcium, and other vitamins for cervical cancer have been outlined, yet further exploration of the precise mechanistic underpinnings of these potential effects is warranted. Therefore, we suggest that further studies should focus on explorations of the intricate interplay among diverse molecular pathways and entities, elucidation of the mechanistic underpinnings of VDR polymorphic loci in the context of HPV infection and VD, inquiries into the mechanisms of VD in conjunction with calcium and other vitamins, as well as investigations of the efficacy of VD supplementation or VDR agonists as part of cervical cancer treatment strategies in clinical trials.
Metabolic and biological functions of VD VD is a crucial nutrient required for maintaining various daily activities. It serves as a potent precursor for the steroid hormone that mainly regulates the balance of calcium and phosphorus in the human body and contributes to bone mineralization. In recent years, numerous studies have linked VD deficiency to several extra-skeletal diseases, including cancer, high blood pressure, and autoimmune diseases 10 . VD can be obtained from two different ways: body synthesis and dietary intake. It is pertinent to note that dietary sources of VD include fatty fish (e.g. salmon: 441IU per 100g), fortified foods such as fortified milk (40IU per 100g) and fortified cheese (301IU per 100g), and to a lesser extent, egg yolks (218IU per 100g) and “sun-dried” mushrooms (154IU per 100g) 11 . However, dietary sources of VD alone are insufficient to meet the body's needs, making the primary source of VD synthesis in the skin via photochemistry in response to ultraviolet B (UVB) radiation. Currently, several types of VD exist, with VD 2 and VD 3 being the more important. VD 2 (ergocalciferol) is mainly produced in plants, while VD 3 (cholecalciferol), synthesized as a prohormone in the skin, is the body's primary source of VD 12 . However, VD itself is not biologically active and needs to undergo two hydroxylation processes to convert into its biologically active form, 1,25-dihydroxyvitamin D 13 . The liver converts VD into 25-hydroxyvitamin D (25(OH)D), which is the main type of VD in the body, and has a long half-life and a high serum concentration. The serum 25(OH)D level serves as an effective indicator of the body's VD levels. The kidneys then convert 25(OH)D into the active metabolite 1,25-dihydroxyvitamin D , of which 1, 25-dihydroxyvitamin D 3 (1,25-(OH) 2 D 3 , calcitriol) is the most active, which exerts its physiological effects by binding to VD receptors (VDRs) (Figure 1 ). Both 25(OH)D and 1,25-(OH) 2 D 3 bind to VD binding protein and are transported in the blood. As 1,25-(OH) 2 D 3 has a 1000-fold higher affinity for VDR than 25(OH)D, it is considered the primary effector of VDR binding 14 , 15 . The active metabolite 1,25-(OH) 2 D 3 binds to VDR and regulates various physiological processes in the body 16 , 17 . The structure and biological function of VDR VDR, as a transcription factor, reacts to 1,25-(OH) 2 D 3 and mediates its biological effects, such as calcium homeostasis and immune response 18 , 19 . When activated by 1,25-(OH) 2 D 3 , VDR promotes the expression of genes responsible for enhancing calcium absorption from dietary sources into the bloodstream 18 . Activated VDR can influence the immune system by modulating the production of cytokines, such as interleukin-2 (IL-2), and regulating the differentiation and function of immune cells 19 . VDR can heterodimerize with retinoid X receptor (RXR) to bind to response elements on target genes and thereby modulate gene expression 20 (Figure 1 ). The VDR gene is located in chromosome 12, with hundreds of single nucleotide polymorphisms. Promoter methylation can cause epigenetic changes, but its effect on VDR expression remains controversial 21 . The occurrence of VDR gene polymorphism may influence VD on various physiological processes 22 . For example, deleterious mutations in the VDR gene may cause inherited 1,25-(OH) 2 D 3 resistant rickets, wherein mutated VDR retains the ability to bind to 1,25-(OH) 2 D 3 but exerts an antagonistic influence on its biological outcomes 23 . 
VDR gene polymorphism (Fok1) may also be associated with calcium regulation 24 . VDR can also exhibit nongenomic effects that do not involve VD, which may be related to protein-protein interactions 25 . VD and cervical cancer Several ecological studies have investigated the potential impact of UVB exposure on the risk of cervical cancer through the modulation of VD levels. One such study of more than 70,000 cases of black and white patients by Adams et al using the data from the National Cancer Institute found a negative correlation between UV exposure and cervical cancer incidence in America 26 . Grant et al. also conducted a multifactorial ecological study in Caucasian Americans from 50 states, which demonstrated a negative association between UVB exposure and cervical cancer mortality 27 . Additionally, Chen et al. carried out an ecological study using data from the National Central Cancer Registries (1998-2002), and they reported a 13% decrease in the incidence of cervical cancer for every 10-unit increase in UVB 28 , while Grant found a correlation between increased UVB exposure (heat zone and latitude) and reduced mortality of cervical cancer, using data from the National Death Survey in China (1973-1975) 29 . Grant's study in twelve administrative areas in France showed a positive relationship between cervical cancer incidence and latitude 30 . Taken together, the findings of these ecological studies provide evidence supporting the hypothesis that increased VD levels resulting from UVB exposure may play a role in reducing the risk of cervical cancer (Table 1 ). However, it is important to note that ecological studies inherently lack the capacity to establish a causal relationship, thus underscoring the necessity for additional research in this regard. The inhibitory effect of VD on cervical cancer may be related to its potential association with HPV infection. VD has been shown to activate genes and pathways involved in both innate and adaptive immunity, indicating its role in the immune process 18 . Previous studies have demonstrated that VD may be utilized as a prophylactic and adjuvant therapy for diseases caused by impaired immune homeostasis 31 . Özgü et al. have suggested that VD deficiency may be a risk factor of HPV DNA persistence and related CIN 32 . In a cross-sectional survey of 4,343 women, Gupta et al. discovered that serum VD deficiency was associated with a higher risk of HR-HPV infection 33 . In a cohort study involving 7,699 female adults Chu et al. found that women with HR-HPV infection had lower serum VD levels 34 . Shim et al. observed an increased in HR-HPV infection rates for every 10ng/ml fall in serum 25(OH)D levels in a cross-sectional study of 2353 women in the United States (OR=1.14, 95%CI: 1.02-1.27) 35 , while Troja et al. found no correlation between HR-HPV infection rates and the increase of serum 25(OH)D level by 10ng/ml in a narrower age range of 30-50 years old 36 . It may be that age range has some influence on the association between VD and HPV infection 35 , 36 . Nevertheless, these findings have suggested that persistent HPV infection is linked to decreased immune function, and VD may reduce HPV infection rates by improving the body's immunity, thus reducing cervical cancer morbidity and mortality (Table 2 ). Numerous epidemiological studies have investigated the relationship between VD and cervical cancer. In a case-control study of 405 cervical cancer patients and 2,025 age-matched controls, Hosono et al. 
found that Japanese women who consumed higher doses of VD had a lower incidence of invasive cervical cancer, but not CIN3 37 . A randomized, double-blind, placebo-controlled parallel clinical trial involving 58 patients with CIN1 in Iran by Vahedpoor et al. revealed that long-term use of VD supplements for six months was associated with a higher proportion of CIN1 regression than the placebo group 38 . Meanwhile, Vahedpoor et al. 39 conducted the similar randomized controlled trial on 58 patients with CIN2/3 and found that the same VD supplement for six months had a positive impact on reducing the recurrence rate in CIN1/2/3. However, when CIN1 was excluded, the difference of recurrence rate between the intervention and placebo groups was no longer statistically significant. The same dose and duration of VD supplementation did not produce the same effect in CIN2/3 patients as in CIN1 patients, possibly due to the greater severity of cervical lesions and more severe symptoms in CIN2/3, which may require a higher dose and longer period of VD supplementation. While these studies have provided a foundation for further investigation, larger and more diverse sample sizes are generally preferred to draw more robust conclusions (Table 3 ). According to animal experimentation, the administration of calcitriol alone demonstrated a suppressive effect on cervical tumors in mice. However, there was no evidence indicating that calcitriol enhanced the therapeutic efficacy of radiation treatment 40 . There are few animal studies on the effects of VD on cervical cancer, but the current study found that VD may possess anti-carcinogenic properties in the context of cervical cancer, which warrants further investigation to elucidate the underlying mechanisms 40 . VDR and cervical cancer The literature on the relationship between cervical cancer and the VDR is limited. A review has reported the role of VDR upregulation in gynecological cancers, with elevated expression levels of VDR in comparison to normal tissues observed in endometrial cancer, ovarian cancer, cervical cancer, and vulvar cancer 9 . Contrasting the differences in VDR expression levels in invasive breast cancer tissues, a cohort study has revealed a negative correlation between VDR expression levels and the degree of tumor malignancy. That is to say, as the degree of malignancy worsens, VDR expression levels decrease 41 . High VDR expression was inversely associated with the malignancy of prostate cancer and the risk of cancer-related mortality 42 . Another cohort study indicated that increased VDR expression was associated with improved overall survival rates, suggesting that VDR levels could serve as a prognostic marker 43 . However, Friedrich et al. did not find a significant association between VDR expression levels and cervical cancer pathological staging, differentiation, and lymph node metastasis 44 , 45 . This disparity might arise from the limited small size of 50 cervical cancer tissues and 15 benign cervical tissues in the study 44 . In a review study, combining VDR agonists with standard treatment modalities like aromatase inhibitors has been proposed to enhance the treatment response in breast cancer 46 . Therefore, further in-depth investigation is required to elucidate the role of VDR upregulation in cervical cancer progression, prognosis and response to treatment (Table 4 ). 
Mechanisms underlying the VD-VDR signaling in cervical cancer While epidemiological studies have demonstrated associations between VD, VDR and a reduced risk of cervical cancer, and certain mechanisms have been investigated, the precise mechanisms by which VD and VDR influence cervical cancer remain incompletely understood and warrant further exploration. Regulation of gene expression by the VD-VDR signaling Several studies have reported that the VD-VDR signaling in cervical cancer cells can directly impact the expression of key oncogenes and tumor suppressor genes 47 , 48 . For instance, VDR activation has been shown to downregulate oncogenes like human cervical cancer oncogene-1 (HCCR-1) and upregulate tumor suppressors like p21 and p53, leading to cell cycle arrest and inhibition of cell proliferation 47 , 49 . Wang et al. conducted research revealing that calcitriol induced cell cycle arrest in the G 1 phase, thereby inhibiting the proliferation of HeLaS3 cells, through the downregulation of HCCR-1 expression, concomitant with an increase in the expression and promoter activity of p21 47 . Notably, p21 is known to interact with p53, leading to the inhibition of cell proliferation 49 . These findings suggest that VD may play a crucial role in inhibiting the HCCR-1 oncogene in cervical cancer and may potentially act as an anti-cervical cancer agent. Furthermore, the VD-VDR signaling also influences the expression of genes related to immune responses. It can upregulate genes involved in antigen presentation, such as major histocompatibility complex (MHC) class II molecules, enhancing the immune recognition of cancer cells 50 , 51 . Additionally, VDR can downregulate genes associated with inflammation, thus reducing the pro-inflammatory environment within the tumor microenvironment 52 . Previous research also indicated that VDR regulates genes involved in DNA repair pathways; activation of VDR has been shown to reduce nitrosylation of DNA repair enzymes, thereby maintaining genomic stability 53 . Regulation of cellular processes by the VD-VDR signaling In cervical cancer, the VD-VDR signaling has been shown to exert inhibitory effects by regulating various cellular processes 47 , 54 - 58 , such as cell proliferation, differentiation, and apoptosis. Several in vitro studies have investigated associations between VD, VDR and cervical cancer 47 , 54 - 58 . Calcitriol was found to arrest HeLa cells in the G 0 /G 1 phase and to inhibit their proliferation, as shown with the cell counting kit-8 assay and flow cytometry 47 . Conversely, when the Ki67 nuclear antigen method and flow cytometry were employed, cholecalciferol was found to have no effect on the proliferation of SiHa and Caski cells; instead, it led to an increase in the sub-G 1 fraction, while no effect was observed on the G 0 /G 1 phase 54 , 55 . Furthermore, 25-hydroxycholecalciferol had no effect on SiHa cell proliferation by the Ki67 nuclear antigen method; it did, however, induce an increase in the sub-G1 fraction, albeit without instigating cell cycle arrest, as elucidated through flow cytometry analysis 57 . These findings lead to a plausible hypothesis that the two activation pathways of cholecalciferol in vivo, mediated through the liver and kidney, may have specific impacts on cellular behavior. This conjecture is supported by the observed differential effects of the various forms of vitamin D on cell proliferation and the cell cycle.
Moreover, the VD-VDR signaling plays an important role in activating both intrinsic and extrinsic apoptosis pathways, which highlights its potential as a therapeutic target for inducing programmed cell death in cervical cancer cells. The VD-VDR signaling has been found to activate the intrinsic apoptosis pathway in cervical cancer cells. This involves the release of pro-apoptotic factors from the mitochondria, such as cytochrome c, which triggers the formation of the apoptosome and subsequently leads to caspase activation and apoptosis 59 , 60 . In addition to the intrinsic pathway, the VD-VDR signaling can also influence the extrinsic apoptosis pathway. Studies have shown that VDR activation can upregulate death receptors like Fas and Fas ligand (FasL), leading to the activation of caspase and initiation of extrinsic apoptosis 61 , 62 . While these findings listed above are often based on specific cancer cell lines, it's critical to acknowledge that the response to VDR activation may differ among various cervical cancer subtypes, and extrapolating these results to clinical setting requires caution. The VD-VDR signaling can also modulate the expression of Bcl-2 family proteins, which are critical regulators of apoptosis. VDR activation has been reported to decrease the expression of anti-apoptotic proteins like Bcl-2 and increase the expression of pro-apoptotic proteins like Bcl-2-associated X protein (Bax), tilting the balance in favor of apoptosis induction 63 , 64 . However, it's important to note that the interplay between these pathways can be complex and context-dependent, involving multiple factors and feedback loops. Epigenetic changes and the VD-VDR signaling Epigenetic alterations, including DNA methylation, can affect VDR expression and activity. Studies have shown that promoter hypermethylation of the VDR gene can lead to reduced VDR expression in cervical cancer cells. This epigenetic silencing of VDR can impair its tumor-suppressive functions, including the regulation of gene expression and apoptosis 65 . Additionally, epigenetic changes can also affect the expression of genes involved in cervical cancer pathogenesis and progression. For example, DNA methylation of tumor suppressor genes can lead to their silencing, promoting uncontrolled cell growth. VDR signaling may play a role in regulating the epigenetic status of some of these genes 63 . However, it's essential to recognize that epigenetic modifications are highly context-dependent and can vary among individuals. Moreover, while altered VDR expression due to methylation changes is observed, the downstream effects on gene expression and apoptosis may differ among cervical cancer subtypes. Epigenetic modifications of histones, such as histone acetylation and methylation, can also influence VDR-mediated effects. Altered histone marks at VDR target gene promoters can impact VDR binding and subsequent gene regulation 66 , 67 . This highlights the intricate interplay between epigenetic modifications and the VD-VDR signaling in cervical cancer. Nevertheless, the exact mechanisms by which histone modifications interact with VDR signaling in cervical cancer cells remain an active area of research. The specificity and dynamics of these interactions need further elucidation. Interactions with other signaling pathways VD and Ether à go-go potassium channels The Ether à go-go (EAG) family of potassium channels has been implicated in promoting cancer cell proliferation and is expressed in a variety of cancer types 68 , 69 . 
EAG1, a member of this family 70 , is expressed at low levels in normal tissues but is significantly overexpressed in various cancer types. Inhibition of EAG1 expression has been shown to reduce cell proliferation 71 . Notably, EAG1 expression has been detected in cervical cancer tissues and cell lines, with increased expression as the severity of CIN increases 68 , 72 . Previous research has demonstrated that calcitriol can downregulate the expression of EAG1 mRNA and protein in SiHa cells, leading to reduced proliferation. Moreover, this effect was more pronounced in cells transfected with VDR expression vectors, indicating that VDR played a crucial role in mediating the inhibition of EAG1 gene expression 73 . Further studies on SiHa and C33A cells have revealed that calcitriol inhibited EAG1 gene expression at the transcriptional level, with the involvement of VDR 74 . Various studies have reported that estrogen could play a role as a co-factor in the increased risk of cervical cancer in women with HPV DNA + 75 . Specially, Díaz et al. discovered that the expression of estrogen receptor-α (ERα) led to a strong upregulation of EAG1 expression in HeLa cells 76 , in response to both estradiol and anti-estrogen. Conversely, another study found that estradiol, through G protein-coupled receptor 30 (GPR30), rather than ERα, contributes to the destabilization of genome structure in HPV-infected cells, potentially promoting carcinogenesis 77 . This discrepancy may be due to different concentrations of estradiol, resulting in distinct mechanisms of action, which may be explained by the dual effects of estrogen concentrations. Moreover, Dupuis et al. confirmed that estrogen had the capacity to enhance the effect of VD by facilitating the expression of VDR. VD, on the other hand, can down-regulate aromatase expression, reducing the level of estrogen 78 . Consequently, as subsequent studies unfold, the determination of the optimal estrogen concentration may have substantial significance for influencing VD in cervical cancer. In previous studies, the HPV oncoproteins E6 and E7 caused the loss of cell protein p53 and retinoblastoma protein pRb 79 . It is noteworthy that EAG1 may be down-regulated by p53 and pRb pathways 76 , indicating a potential synergistic effect of estrogen and HPV on EAG1 expression. The HERG potassium channel, a member of the EAG potassium channel family, has been identified in various cancer types, and HERG mRNA has been detected in HeLa cells 70 . Suzuki et al. have observed the expression of the HERG gene in C33A cells, and HERG channel inhibitors have been shown to reduce the G 2 /M phase cell ratio 80 . Furthermore, VD has been reported to up-regulate tumor necrosis factor α (TNF-α) in cancer cells 81 , leading to an increase in intracellular reactive oxygen species 82 . Nevertheless, it is noteworthy that reactive oxygen species may increase the outward current of HERG potassium channels, while its scavengers can reduce the outward current at rest 83 . Therefore, it is hypothesized that VD may have a modulatory effect on the HERG potassium channel. Wnt/β-catenin pathway The abnormal activation of the Wnt/β-catenin signaling pathway is implicated in irregular cell proliferation and differentiation, which can contribute to tumorigenesis. Studies have revealed an elevated rate of abnormal β-catenin protein expression along with the progression of CIN and the development of cervical cancer 84 . 
Notably, HPV E6 and E7 oncoproteins have been shown to up-regulate β-catenin expression, activate the Wnt pathway, and thereby promote cervical cancer progression 85 , 86 . In related research, VD has exhibited inhibitory effects on the progression of various cancers, such as melanoma 87 , Kaposi's sarcoma 88 , oral squamous cell carcinoma 89 , and ovarian cancer 90 , by modulating the Wnt/β-catenin signaling pathway through its interaction with VDR. Therefore, it is reasonable to speculate that VD may inhibit the progression of cervical cancer by binding to VDR, potentially implicating the Wnt/β-catenin signaling pathway as a target for intervention. PI3K-AKT pathway and PI3K-AKT-mTOR pathway The phosphatidylinositol 3-kinase/Akt (PI3K/AKT) pathway has the capacity to modulate the expression of key oncogenes (e.g. HCCR-1) and tumor suppressor genes (e.g. p53) 91 and to elicit the transition between epithelial and stromal tissues 92 , thereby serving as a facilitator of the onset and advancement of cervical tumors. While studies have demonstrated VD's ability to impede non-small cell lung cancer progression through the PI3K/AKT pathway 93 , this pathway's involvement in VD's effects on cervical cancer remains unexplored. However, the association between this pathway and HCCR-1 94 , which is associated with cervical cancer progression, strengthens the hypothesis that the PI3K/AKT pathway may be instrumental in mediating VD's effects on cervical cancer. The PI3K-AKT-mTOR pathway is an extension of the PI3K/AKT pathway and includes the mammalian target of rapamycin (mTOR) as a key downstream component. This extended pathway is known to regulate essential cellular functions including proliferation, differentiation, and apoptosis 95 . In various cancer contexts, VD has been shown to engage VDR and inhibit tumor progression through the PI3K-AKT-mTOR pathway, as observed in Kaposi's sarcoma cells 96 and non-small cell lung cancer 97 . Aberrant activation of the PI3K-AKT-mTOR pathway has been documented in cervical cancer 98 . Notably, VD has demonstrated the ability to impede the growth of HeLa cells by suppressing autophagy and altering mitochondrial homeostasis through modulation of the PI3K-AKT-mTOR pathway 58 . However, it is crucial to highlight that studies on the PI3K-AKT-mTOR pathway have primarily focused on HeLa cells and have not encompassed other cervical cancer cell lines, such as Caski, SiHa and C33A cells. Therefore, it remains inconclusive whether the observed effects of VD are attributable to specific characteristics of HeLa cells, such as their HPV type. It is crucial to acknowledge that the development and progression of cervical cancer involve a complex interplay of multiple factors, and the VD-VDR signaling represents one aspect of this complexity (Figure 2 ). Thus, the translation of these findings into clinical applications may face challenges related to the specificity, safety, and potential side effects of VDR-targeted therapies. VDR gene polymorphism and cervical cancer The antitumor effect of VDR is influenced by gene polymorphism, which can affect the activity of the VD-VDR complex 99 , 100 . Certain gene polymorphisms may reduce VDR activity and responsiveness to calcitriol, thereby favoring the progression of cervical cancer. Genetic polymorphic loci such as Apa1, Bsm1, Taq1, Fok1 and Cdx2 have been extensively studied in the context of tumors 101 .
Investigations have shown that VDR gene polymorphism is associated with ovarian cancer (Fok1, Apa1), breast cancer (Bsm1, Fok1) and other tumors 102 . Phuthong et al. have detected VDR polymorphism (Fok1, Apa1, and Taq1) in 204 patients with cervical squamous cell carcinoma and 204 healthy controls matched by age, and found that Taq1 was associated with cervical cancer in northeastern Thailand, and Taq1 and Fok1 may interact to affect the development of cervical cancer but no association in APa1 103 . Meanwhile, Li et al. found that Fok1 and Taq1 polymorphisms were associated with an increased risk of CIN2+ (CIN2, CIN3 and cervical cancer) within the Shanxi population 104 . It is imperative to acknowledge certain limitations in these studies, such as their exclusive focus on cervical squamous cell carcinoma 103 , or their restriction to HPV16+ CIN2+ patients, without distinguishing between CIN and cervical cancer patients 104 . Notably, these studies did not consider gene-environment interactions, for example, skin color may affect the process of skin synthesis of VD under light, and circulating VD levels may interact with the VDR gene variants to have an impact on cervical cancer risk. Future research should expand the scope of investigation to include other subtypes of cervical cancer, and consider the intricate interplay between genetic and environmental factors. Such endeavors will undoubtedly advance our understanding of the association between VDR gene polymorphism and cervical cancer. Persistent HPV infection is linked to the impairment of immune function. VDR is expressed in multiple cell types, including immune cells 31 , indicating a potential association between VDR and HPV infection. Previous studies have suggested that VDR gene polymorphism is related to viral infection 105 , and the polymorphism of the VDR gene may influence the effect of VD by affecting its activity and expression. Nevertheless, to date, empirical examinations that examine the relationship between VDR gene polymorphism and HPV infection remain conspicuously absent from the extant literature, thereby warranting consideration as a prospective avenue of research in forthcoming investigations. Interaction of VD with calcium and other vitamins Although vitamins and minerals are micronutrients, their impact on human physiological processes is profound. Minerals and vitamins, such as calcium (Ca), vitamin A (VA), vitamin B (VB), vitamin C (VC), VD, vitamin E (VE) and vitamin K (VK), have been reported as potential preventive measures against the progression of cervical cancer 9 , 106 - 110 . Studies have highlighted the potential significance of VD in impeding cervical cancer progression, warranting further exploration of the interactions of VD with Ca and other vitamins. VD has been recognized for its traditional role in the regulation of calcium and bone metabolism. VD may collaborate with calcium by regulating calcium metabolism in the body. Exploring the synergistic effects of Ca and VD in cervical cancer holds therapeutic promise, particularly considering the significant role of Ca signaling channels in cervical cancer 111 . VA plays an anti-cancer role through its antioxidant capacity 112 . Retinoic acid, a metabolite of VA in the body, exerts its function through the retinoic acid receptor (RAR)-RXR heterodimer, formed by RXR and the nuclear receptor of the RAR family. This complex interacts with retinoic acid response elements (RAREs) within the promoters of retinoic acid-responsive genes 113 . 
Given that both VA and VD interact with RXR, the possibility of antagonistic effects arises. Epidemiological studies have shown that VA levels in both dietary intake (OR=0.59 95%CI 0.49-0.72) and blood (OR=0.60 95%CI 0.41-0.89) are inversely associated with cervical cancer risk 114 . Moreover, VA has been shown to inhibit the transcription level of HPV oncogenes 115 . Therefore, it is reasonable to speculate that VA can affect EAG1 by affecting HPV oncogenes, but the competitive relationship of RXR also needs to be considered. Folate, a member of the B vitamin group, mainly uses three folate transporters for cellular uptake: the reduced folate carrier (RFC), the proton-coupled folate transporter (PCFT) and folate receptors (FOLRs). Studies have suggested that VD 3 and its receptors can increase the expression of PCFT, thus increasing folate intake 116 . However, conflicting findings exist, with some studies reporting no effect of VD on folate levels in the body 117 . The effect of VD on folic acid metabolism is still controversial, potentially involving undiscovered mechanisms warranting further investigation. Notably, folate receptor ɑ (FRɑ), one of the FOLRs, displays increased expression with the progression of cervical lesions and is highly expressed in cervical cancer 118 , 119 . FRɑ exerts its influence on p53 and p21 through the ERK1/2 signaling pathway 118 . Given that VD can also influence p53 and p21 through various mechanisms, it is reasonable to postulate potential interactions between folate and VD in the context of cervical cancer, although over-supplementation of folate may have a positive effect on precancerous changes 120 . VC exerts its anti-cancer effects by acting on redox imbalances, epigenetic reprogramming an oxygen-sensing regulation of cancer cells 121 . Patients with cervical cancer have been reported to exhibit lower levels of VC in both blood levels and dietary intake compared to normal controls 122 - 124 . VC can potentially impact cervical cancer through pathways involving TNF-α and p21 125 , which may interact with the anti-cancer pathway of VD. VE hinders cancer progression by promoting apoptosis and inhibiting cell proliferation and invasion 126 . Epidemiological studies have shown that VE levels in both dietary intake (OR=0.68 95%CI:0.49-0.94) and blood (OR=0.52 95%CI:0.40-0.69) are inversely associated with cervical cancer risk 127 . VE consists mainly of tocopherols and tocotrienols, with tocopherol notably inhibiting the AKT signaling pathway to exert its anticancer effects 126 . Tocotrienol, another component of VE, can inhibit cell proliferation through several pathways, including the PI3K/AKT pathway, the Wnt/β-catenin pathway and TNF-ɑ 128 . The potential relationship between VE and the action pathway of VD suggests the possibility of a synergistic role against cervical cancer. VK can promote apoptosis, induce cell cycle arrest and overcome drug resistance by inhibiting P-glycoprotein, offering potential value when combined with VD in the treatment of cervical cancer 129 . A cross-sectional survey of more than 10,000 women in the United States suggests an association between VK intake and HPV infection, although the relationship appears non-linear 130 . VK can produce reactive oxygen species, which leads to apoptosis 131 , 132 . Given that VD also influences reactive oxygen species, the potential synergy in inhibiting cervical cancer progression warrants consideration.
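Many of the dietary and serum associations cited above are expressed as odds ratios with 95% confidence intervals. As a reminder of how such an estimate is derived, here is a minimal sketch using purely hypothetical counts (not data from any of the cited studies):

```python
# Odds ratio and 95% CI (Woolf method) from a 2x2 table; counts are hypothetical.
# Rows: exposed (e.g. high vitamin intake) vs unexposed; columns: cases vs controls.
import math

a, b = 40, 160   # cases / controls among the exposed  (hypothetical)
c, d = 70, 130   # cases / controls among the unexposed (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {low:.2f}-{high:.2f}")
```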
Funding This study was supported by the National Natural Science Foundation of China (grant numbers 82003440, 82173519 and 72174144). The funder was not involved in the study design, the collection, analysis or interpretation of data, the writing of this article, or the decision to submit it for publication. Author contributions Xu-mei Zhang, Juan Xie and Zhuo-yu Sun designed the conceptualization. Han-yu Dong and Shi-yue Chen wrote, reviewed and revised the manuscript. Qi-liang Cai and Xiao-shan Liang provided materials and technical support. All authors contributed to critical revision of the manuscript for important intellectual content and gave final approval. Abbreviations CIN: cervical intraepithelial neoplasia; HR-HPV: high-risk human papillomavirus; HPV: human papillomavirus; VD: vitamin D; VDR: vitamin D receptor; UVB: ultraviolet B; RXR: retinoid X receptor; RR: risk ratio; CI: confidence interval; OR: odds ratio; Q: quartile; HR: hazard ratio; HCCR-1: human cervical cancer oncogene-1; MHC: major histocompatibility complex; FasL: Fas ligand; Bax: Bcl-2-associated X protein; EAG: Ether à go-go; ERα: estrogen receptor-α; GPR30: G protein-coupled receptor 30; TNF-α: tumor necrosis factor α; PI3K/Akt: phosphatidylinositol 3-kinase/Akt; mTOR: mammalian target of rapamycin; Ca: calcium; VA: vitamin A; VB: vitamin B; VC: vitamin C; VE: vitamin E; VK: vitamin K; RAR: retinoic acid receptor; RARE: retinoic acid response element; RFC: reduced folate carrier; PCFT: proton-coupled folate transporter; FOLR: folate receptor; FRα: folate receptor α
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):926-938
oa_package/fa/a1/PMC10788714.tar.gz
PMC10788715
0
Introduction Gastric cancer (GC) remains a major cancer worldwide, accounting for over one million new cases in 2020 and an anticipated 769,000 deaths, ranking fourth for mortality and fifth for incidence globally 1 . Surgery and chemotherapy remain the most effective treatment modalities, and gene therapy has seen significant advancements in the past decade 2 . However, due to the low rate of early detection and diagnosis, many patients with gastric cancer have very poor survival, and the overall 5-year survival rate of GC patients is less than 50% in China 3 , 4 . Gastric cancer development is a complicated process that involves many genetic and epigenetic changes in oncogenes, tumor suppressor genes, DNA repair genes, cell cycle regulators, and signaling molecules 5 . Therefore, it is crucial to elucidate the pathogenesis of gastric cancer by identifying novel targets for treatment. MicroRNAs (miRNAs), 18-22 nucleotides in length, are a class of small, highly conserved noncoding RNA molecules. It is generally considered that miRNAs influence gene expression at the post-transcriptional level by binding to mRNA 6 . Much evidence demonstrates that miRNAs are associated with various tumors, such as breast cancer, lung cancer and ovarian cancer 7 - 9 , and they contribute to cancer through nuclear functions that affect gene transcription and epigenetic states 10 . Importantly, miRNAs are also associated with gastric cancer progression. MiR-34a inhibits the proliferation and invasion of gastric cancer by regulating PD-L1 11 . MiR-1262 inhibits gastric cardia cancer by targeting the oncogene ULK1 12 . MiR-542-3p suppresses cell proliferation by targeting the oncogene astrocyte-elevated gene-1 13 . This evidence shows that miRNAs are involved in gastric cancer progression. Our previous work, which identified the differentially expressed (DE) miRNAs of GC, demonstrated that miR-766-3p plays a key role in GC 14 . One study showed that miR-766-3p contributes to anti-inflammatory responses and halts inflammatory carcinoma transformation by indirectly inhibiting NF-κB signaling 15 . MiR-766-3p has also been shown to target a variety of genes in cancers such as hepatocellular carcinoma, colorectal cancer, renal cell carcinoma and thyroid carcinoma. The miR-766-3p/FOSL2 axis plays an oncogenic role in hepatocellular carcinoma 16 . MiR-766-3p inhibits proliferation in colorectal cancer cells via the PI3K/AKT pathway when HNF4G is down-regulated 17 . It also targets and inhibits SF2 expression and thereby promotes the proliferation of renal cell carcinoma cells 18 . Circ_0059354 accelerates the growth of papillary thyroid cancer by increasing ARFGEF1 levels via miR-766-3p sponging 19 . These studies show that miR-766-3p plays an important role in cancer; however, few studies have explored its mechanisms in gastric cancer. In this paper, the core targets and signaling pathways of miR-766-3p in gastric cancer were further analyzed. We identified COL1A1 as the core target of miR-766-3p through database prediction and screening, and the binding sites between miR-766-3p and COL1A1 were verified using a dual luciferase assay. Functional enrichment analysis indicated that COL1A1 is associated with the PI3K/AKT signaling pathway. Experiments were then used to validate the conclusions of the data analyses.
Materials and methods Patients and samples From July 2022 to May 2023, a total of 60 clinical tissue samples (30 tumor samples and 30 adjacent normal samples) were collected from patients at Yueyang Hospital of Integrated Traditional Chinese and Western Medicine (Table 1 ). Inclusion criteria: (1) clinical specimens confirmed to be gastric cancer by histopathological examination, with at least one solid or measurable extra-gastric lesion; (2) age 18 to 85 years; (3) Eastern Cooperative Oncology Group (ECOG) score of 0-2; (4) patients or their authorized relatives signed the informed consent before enrolment. Exclusion criteria: (1) patients who had received radiotherapy or chemotherapy; (2) history of other tumors within 5 years; (3) pregnancy or lactation; (4) organ failure or other serious comorbid diseases; (5) comorbid neurological or psychiatric history. Paraneoplastic tissue was taken at least 4 cm away from the tumor lesion. All tissue samples were immediately frozen and preserved in liquid nitrogen until further use, and were then kept at -80°C for RNA and protein extraction. MiR-766-3p target prediction and enrichment analysis The miRNA target genes were predicted using two databases, TarBase ( http://www.diana.pcbi.upenn.edu/tarbase ) (v8.0) and TargetScan ( http://www.targetscan.org/vert_72/ ), which contain the largest collections of manually curated experimental data. The signaling pathways of the hub target were enriched using DAVID ( https://david.ncifcrf.gov ) online. Differentially expressed genes (DEGs) identification The mRNA dataset (GSE118916) of gastric cancer was obtained from the GEO database ( https://www.ncbi.nlm.nih.gov/geo/ ), which includes 30 GC tissue samples and 30 normal samples. The mRNA levels of all samples were standardized using DESeq software, and the significance test for differences in reads was performed using the negative binomial (NB) distribution. The R package was used to identify the differentially expressed genes (DEGs) between the tumor group and the normal group (fold change > 1.5, P < 0.05). Protein-protein interaction (PPI) network construction The DEGs were used to construct a PPI network with STRING (v11.5), and CytoHubba clusters were used to obtain hub genes. The determinate nodes were considered potential hub mRNAs, to be further validated in databases and experiments. Core genes validated by databases Data from the TCGA (The Cancer Genome Atlas) and GTEx (Genotype-Tissue Expression) projects were used to conduct expression and survival analyses for possible indicators. The Human Protein Atlas (HPA) database ( https://www.proteinatlas.org ) was then used to obtain immunohistochemistry results for the core proteins in stomach glandular cells and tumor cells. Dual-luciferase reporter assay The binding sites of miR-766-3p and COL1A1 were calculated and predicted using miRanda (v3.3). The wild-type and mutant plasmids of COL1A1 (psiCHECK-2-WT-COL1A1 3′UTR, psiCHECK-2-MUT-COL1A1 3′UTR) were constructed and provided by Yilaibo ( http://www.shyilaibo.com ). Luciferase activity was assessed with a dual luciferase kit (E1901; Promega, USA) 48 h after co-transfection of each plasmid with miR-NC or miR-766-3p. qRT-PCR validation Total RNA was isolated using FreeZol Reagent 200rxns (R711-01, Vazyme). Reverse transcription was performed using a miRNA 1st Strand cDNA Synthesis Kit (MR101-01, Vazyme) and HiScript II Q RT SuperMix for qPCR (R223-01, Vazyme).
Then, using the miRNA Universal SYBR PCR MasterMix (MQ101-01, Vazyme) and ChamQ SYBR qPCR Master Mix (Q321-02, Vazyme), the levels of miRNA and mRNA expression were determined. GAPDH and U6 were employed as endogenous controls for mRNA and miRNA expression levels, respectively. Finally, the relative RNA expression levels were calculated using the 2 -ΔΔCt method. Table 2 displays the primer sequences. Western blot RIPA reagent (Epizyme) was used to extract total proteins. Protein samples were loaded and separated by 7.5% SDS-PAGE before being transferred to NC membranes and blocked at room temperature for 1 hour. The membranes were washed five times with TBST before being incubated overnight at 4°C with anti-COL1A1 (1:1000, 72026T, CST) and anti-GAPDH (1:1000, 5174S, CST). The membranes were then incubated for 1 hour at room temperature with horseradish peroxidase (HRP)-conjugated secondary antibodies (Proteintech). In a dark chamber, the films were developed and fixed with an ECL solution. ImageJ 1.2.4 (NIH, USA) was used to semi-quantify protein expression. Statistical analyses All variables are presented as mean ± SD, and statistical analysis was conducted using the SPSS 23.0 program (IBM Analytics). Groups were compared using two-tailed Student's t-tests, and a P value < 0.05 was regarded as statistically significant. Plotting was done using GraphPad Prism 8 software (GraphPad Software Inc., San Diego, CA, USA).
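For readers unfamiliar with the 2 -ΔΔCt calculation referred to above, the sketch below shows how relative expression is typically derived from raw Ct values, using the same reference genes as in this study (GAPDH for mRNA, U6 for miRNA). It is a minimal illustration with made-up Ct numbers, not the authors' analysis script.

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Relative expression by the 2^-ddCt method.

    ct_target / ct_reference: Ct values of the gene of interest and the
    endogenous control (e.g. COL1A1 and GAPDH) in the sample of interest.
    *_ctrl: the same two Ct values in the calibrator (e.g. adjacent normal tissue).
    """
    d_ct_sample = ct_target - ct_reference            # normalize to the reference gene
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control                # normalize to the calibrator
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: COL1A1 amplifies ~3 cycles earlier in tumor tissue
fold_change = relative_expression(ct_target=22.1, ct_reference=18.0,
                                  ct_target_ctrl=25.3, ct_reference_ctrl=18.2)
print(f"COL1A1 relative expression (tumor vs normal): {fold_change:.2f}")  # prints 8.00
```

With these placeholder values, a ΔΔCt of -3 corresponds to an eight-fold higher relative expression in the tumor sample.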
Results Potential targets of miR-766-3p related to gastric cancer The miR-766-3p target genes were predicted using the prediction programs, and a total of 7088 targets were collected from TarBase and TargetScan. Using the R package, 93 DE mRNAs were obtained by analyzing the 30 gastric cancer clinical microarray samples from the GEO database. In this work, we intersected the 93 DE mRNAs with the 7088 predicted targets and required a target expression abundance greater than 0.5. Following these steps, 18 candidate targets were eventually obtained (Fig. 1 A). PPI network construction and hub mRNA screening Based on the STRING database, a PPI network was constructed from the 18 DE mRNAs (Fig. 1 B). We employed the cytoHubba cluster (Top 5) to screen the key mRNAs of the PPI network, and five core targets were eventually selected (THBS2, COL1A1, FGG, FGB, PLAU) (Fig. 1 C, Table 3 ). Validation of core markers based on the TCGA and HPA databases The expression of the five key genes in gastric cancer (n = 408) and normal (n = 211) tissues was assessed using GEPIA. The results revealed that the expression of THBS2, COL1A1, FGG, and PLAU differed considerably (Fig. 2 A). To further analyze the levels of these 4 targets, we used the HPA database to examine tissue expression by immunohistochemistry. Only one target showed differences in expression between tumor and normal tissues (Fig. 2 B): COL1A1 was negative in normal tissues but strongly expressed in tumors, and immunohistochemistry revealed no differences in the expression of the other targets. In addition, we investigated the prognostic value of COL1A1 in gastric cancer patients based on overall survival (OS). According to the findings, high mRNA levels of COL1A1 were significantly associated with OS in gastric cancer patients (P = 0.014) (Fig. 2 C), whereas the other targets had no impact on overall survival. Based on the validation presented above, we conclude that COL1A1 is highly relevant to gastric cancer and is an essential target of miR-766-3p. Dual-luciferase reporter assay To verify the binding of miR-766-3p to COL1A1, a dual-luciferase reporter assay was applied. The results showed that miR-766-3p significantly inhibited the luciferase activity of psiCHECK-2-WT-COL1A1, whereas it did not inhibit psiCHECK-2-MUT-COL1A1 (Fig. 3 A-C). Validation of the core target and signaling pathway by qRT-PCR and Western blot We analyzed the downstream pathways of COL1A1 with the DAVID database and obtained many signaling pathways (Fig. 4 A). Among them, the PI3K/AKT signaling pathway is a fundamental one, with frequent oncogenic alterations in GC 20 . Thus, we detected the expression levels of miR-766-3p, COL1A1 and PI3K/AKT in the stomach carcinoma tissues and adjacent normal tissues of 30 patients via qRT-PCR and Western blot. As a result, we found that COL1A1 had a significantly higher expression level in tumor tissues than in normal ones, while miR-766-3p, conversely, had a significantly lower expression level in tumor tissues (Fig. 4 B, C). Regarding the PI3K/AKT signaling pathway, PI3K was expressed at low levels and AKT was over-expressed in gastric cancers, with negative regulation between them (Fig. 4 D, E). At the protein level (Fig. 4 F), COL1A1 also differed significantly between tumor and normal tissues (Fig. 4 G). The relative quantitative protein expression levels of PI3K/AKT were consistent with the mRNA levels (Fig. 4 H, I).
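The target-screening workflow described above (intersecting the predicted miR-766-3p targets with the GEO-derived DEGs, applying the expression-abundance filter, and then ranking hub genes in the PPI network) can be summarized in a few lines of code. The sketch below is only illustrative: the file names, column names and the simple degree-based ranking are placeholders standing in for the actual TarBase/TargetScan exports and the cytoHubba "Top 5" selection, which offers several of its own scoring methods.

```python
import pandas as pd
import networkx as nx

# Hypothetical exports: predicted miR-766-3p targets and GEO differential-expression results
predicted = set(pd.read_csv("mir766_predicted_targets.csv")["gene"])   # TarBase + TargetScan union
degs = pd.read_csv("gse_degs.csv")                                     # columns: gene, fold_change, pvalue, abundance

# Filters used in the paper: fold change > 1.5, P < 0.05, expression abundance > 0.5
de_genes = degs[(degs["fold_change"] > 1.5) &       # up-regulated; apply < 1/1.5 analogously for down-regulated genes
                (degs["pvalue"] < 0.05) &
                (degs["abundance"] > 0.5)]["gene"]
candidates = set(de_genes) & predicted              # 18 candidate targets in the paper

# Hub screening: build the STRING-derived PPI network and rank candidates by degree
edges = pd.read_csv("string_edges.csv")             # columns: gene_a, gene_b
g = nx.Graph()
g.add_edges_from(edges[["gene_a", "gene_b"]].itertuples(index=False, name=None))
degree = {gene: g.degree(gene) for gene in candidates if gene in g}
top5 = sorted(degree, key=degree.get, reverse=True)[:5]
print("Candidate hub genes:", top5)
```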
Discussion MiRNAs are a class of regulatory factors involved in tumor regulation, and many studies show that miRNAs influence tumor progression through many functions, including cell division, cell differentiation, angiogenesis, migration, apoptosis and oncogenesis 21 , 22 . For example, by directly binding to PTEN, miR-21 may increase the proliferation, invasion, and migration of GC cells 23 , and miR-21 can promote GC by activating the PI3K/AKT pathway 24 . In GC cells, miR-375 is dysregulated, which promotes activation of the PI3K/Akt pathway and cell survival 25 . In our previous study, by constructing the circRNA-miRNA-mRNA (CMM) network and the protein-protein interaction (PPI) network, we discovered that miR-766-3p plays a crucial role 14 . However, the mechanism of miR-766-3p in GC is unclear. In this study, the potential targets of miR-766-3p were predicted and screened using the TarBase and TargetScan databases, and COL1A1 was identified as the core target of miR-766-3p by combining the TCGA and HPA databases. The qRT-PCR and Western blot results also demonstrated that miR-766-3p and COL1A1 affect the progression of GC. According to bioinformatic analysis, COL1A1 is a hydrophilic, negatively charged secreted protein that is crucial for the formation of collagen structures and cell adhesion 26 . COL1A1 has been found to be elevated in many cancers and to affect various signaling pathways, for example in gastric, colorectal, breast and thyroid tumors 27 - 31 . MiRNA-98 regulation has also been demonstrated to lower COL1A1 mRNA levels 32 . Interestingly, our study found that miR-766-3p can likewise downregulate the high level of COL1A1 in GC. The enrichment analysis revealed that COL1A1 is highly correlated with the PI3K/AKT signaling pathway and promotes the activation of PI3K/AKT 33 . The PI3K/AKT signaling pathway plays an important role in GC. For example, SLC1A3 acts through the PI3K/AKT pathway to hasten the growth of gastric cancer 34 . LGR6 may accelerate the development of GC via the PI3K/AKT/mTOR pathway 35 . The activation of NF-κB and the PI3K/AKT/SP1 axis is necessary for the UBAP2L-induced EMT process in GC cells 36 . BFAR prevents gastric cancer from developing and metastasizing via the PI3K/AKT/mTOR signaling pathway 37 . These studies show that the PI3K/AKT signaling pathway influences the progression of GC. Furthermore, multiple pieces of evidence have demonstrated that miRNAs can regulate PI3K/Akt signaling directly, partially explaining the processes behind their oncogene or tumor-suppressor functions in GC. MiR-196b was found to accelerate GC tumor growth by promoting cell cycle progression and cell proliferation, possibly by activating the PI3K/Akt/mTOR pathway 38 . On the other hand, miR-181d and miR-203 were found to reduce GC cell growth by targeting PIK3CA and, as a result, attenuating Akt activation 39 . This indicates that the PI3K/AKT signaling pathway is involved in the occurrence and progression of gastric cancer. In summary, we demonstrate that miR-766-3p is under-expressed in the tumor tissues of gastric cancer patients. Low expression of miR-766-3p was significantly associated with advanced TNM stage, primary tumor and lymph node status ( Fig. S1 , Table S2 ). COL1A1 is an important target of miR-766-3p and is inversely correlated with the expression of miR-766-3p. The luciferase assay indicated that miR-766-3p suppresses gastric cancer by directly targeting the 3'-UTR of COL1A1 mRNA and down-regulating the levels of COL1A1.
It can also participate in the progression of gastric cancer by influencing the PI3K/AKT signaling pathway. Meanwhile, COL1A1 is one of the main components of the extracellular matrix (ECM). The ECM is also a key component of the tumor micro-environment (TME), which has complementary effects on the development and metastasis of tumors in diverse ways 40 . The miR-766-3p/COL1A1/PI3K/AKT axis may inhibit gastric cancer progression by suppressing the ECM, metabolism, cell cycle progression and cell survival (Fig. 5 ). Our findings suggest a potential molecular basis for the genesis and progression of gastric cancer, which may lead to new approaches to diagnosis and treatment. In the future, it may represent a potential treatment strategy against GC.
* These authors contributed equally to this work and should be considered co-first authors. Competing Interests: The authors have declared that no competing interest exists. Objective MiR-766-3p has been shown to be associated with a variety of cancers. However, few studies have addressed gastric cancer (GC). This study explores the mechanism of miR-766-3p in GC. Methods The potential targets of the microRNA (miRNA) were predicted using the TarBase and TargetScan databases. The results were intersected with the differentially expressed genes (DEGs) of gastric cancer (fold change > 1.5, P < 0.05) to obtain potential core targets. The hub targets were screened by constructing PPI networks (degree > 5, expression > 0.5). The differential expression and immunohistochemical expression of these targets were validated using public databases, and the binding sites between miRNA and mRNA were verified using a dual-luciferase assay. Finally, qRT-PCR and Western blot experiments were conducted to validate the hub targets and signaling pathways. Results The potential hub targets from the PPI network were THBS2, COL1A1, FGG, FGB, and PLAU. Combining database analysis, the luciferase assay and experimental validation, miR-766-3p can directly target COL1A1, which plays the most important role in gastric cancer progression. In GC, COL1A1 was upregulated, the enrichment analysis revealed that COL1A1 regulates the PI3K/AKT signaling pathway, and AKT is also highly expressed in gastric cancer. Conclusion MiR-766-3p can inhibit the progression of gastric cancer by targeting COL1A1 and regulating the PI3K/AKT signaling pathway. It could be a potential therapeutic option for GC.
Supplementary Material
Funding This study was supported by the Natural Science Foundation of Shanghai Science and Technology Commission (No. 20ZR1459300), the Key Program of Yueyang Hospital of Shanghai University of Traditional Chinese Medicine (No. 2019YYZ01), the Special Program of Yueyang Hospital of Shanghai University of Traditional Chinese Medicine (No. 2021yygm06), the Peak Disciplines (Type IV) of Institutions of Higher Learning in Shanghai, the Zhu Shengliang National Famous Elderly Chinese Medicine Experts Inheritance Workshop Construction Project, National Chinese Medicine Human Education Letter [2022] No. 75, the Shanghai Municipal Hospital Gastroenterology Clinical Competence Improvement and Advancement Specialist Alliance (SHDC22021311) and the Shanghai Yueyang Hospital TCM Speciality Construction Project (YW (2023-2024)-01-08). Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher. Ethics approval All procedures performed in studies involving human and animal participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The study was approved by the Bioethics Committee of the Medical University of Yueyang Hospital of Integrated Traditional Chinese and Western Medicine. Author contributions Hongmei Ni, Shengquan Fang, and Qilong Chen designed, conceptualized and supervised the research; Sheng Hu and Caiyun Zhang analyzed the data; Yue Zhou, Ming Han, Jingjing Li and Fulong Li provided clinical data; Yujie Ding and Mengyuan Zhang performed the experiments, analyzed the data, made the figures and wrote the first draft. All authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. Consent to participate Informed consent was obtained from all individual participants included in the study. Abbreviations GC: gastric cancer; miRNA: microRNA; DEGs: differentially expressed genes; TCGA: The Cancer Genome Atlas; GTEx: Genotype-Tissue Expression; HPA: Human Protein Atlas; CMM: circRNA-miRNA-mRNA; ECM: extracellular matrix; TME: tumor micro-environment
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):990-998
oa_package/f5/b0/PMC10788715.tar.gz
PMC10788716
0
1. Introduction Melanoma is a type of skin cancer that develops from the pigment-producing cells known as melanocytes 1 . It is the most serious type of skin cancer and can spread to other parts of the body if not treated early. The incidence and mortality rates of melanoma are increasing year by year 2 . Melanoma treatment typically involves a combination of surgery, radiation therapy, chemotherapy, and immunotherapy 3 . However, for patients with advanced disease, the treatment effect is poor. Seeking novel therapeutic strategies is therefore necessary for the prevention and treatment of melanoma. Growth differentiation factor 15 (GDF15) has been shown to be upregulated in melanoma and is associated with poor prognosis 4 . It has been suggested that GDF15 may be a potential therapeutic target for melanoma 5 . Several studies have demonstrated that GDF15 is involved in the regulation of cell proliferation, migration, and invasion in melanoma cells 6 . In addition, GDF15 has been shown to be involved in the regulation of angiogenesis and immune evasion in melanoma 7 . However, the underlying regulatory mechanism remains unknown. The PTEN/PI3K/AKT signaling pathway is a key pathway involved in the development and progression of many types of tumors 8 . The PI3K/AKT pathway is activated when phosphatase and tensin homologue (PTEN), a tumor suppressor gene, is mutated or deleted 9 . This leads to increased activity of the PI3K/AKT pathway, which can promote cell proliferation, survival, and metastasis 10 . Mutations in the PTEN/PI3K/AKT pathway have been found in many types of cancer, including breast, ovarian, and prostate cancer 11 . In addition, the PTEN/PI3K/AKT pathway is also involved in the development of drug resistance in cancer cells 12 . For example, berberine regulated the Notch1/PTEN/PI3K/AKT/mTOR pathway and acted synergistically with 17-AAG and SAHA in SW480 colon cancer cells 10 . The PTEN/PI3K/Akt pathway alters the sensitivity of T-cell acute lymphoblastic leukemia to L-asparaginase 13 . Therefore, targeting this pathway may be a promising strategy for the treatment of cancer. We speculated that GDF15 might regulate the development of melanoma by affecting the PTEN/PI3K/AKT signaling pathway. The epithelial-mesenchymal transition (EMT) in tumors is a process by which cancer cells undergo a transformation from epithelial cells to mesenchymal cells 14 . This process is associated with increased invasiveness and metastasis of the tumor and is thought to be a key factor in the progression of cancer 15 . The EMT process is regulated by a variety of factors, including growth factors, cytokines, and transcription factors 16 . Whether GDF15 can regulate the progression of melanoma by affecting the EMT process has not been reported. In this study, bioinformatics methods were used to analyze the relationship between GDF15 expression and prognosis. GDF15-knockdown M14 and M21 cell lines were constructed, and the influence of GDF15 knockdown and 740Y-P on cell proliferation, migration, invasion, and apoptosis was measured. We demonstrate for the first time the regulatory role of GDF15 in malignant melanoma through targeting of the PTEN/PI3K/AKT pathway at both the in vitro and in vivo levels. This study might provide a new understanding of the regulatory role of GDF15 in malignant melanoma.
2. Materials and Methods 2.1 Cell culture Cell lines HEM, UACC62, A375, M14, and M21, procured from the American Type Culture Collection (ATCC, USA), were utilized. These cells were maintained at 37°C and 5% CO2 in a humidified incubator. Dulbecco's Modified Eagle Medium (DMEM, Gibco, #12491015, Langley, OK, USA) was the culture medium of choice, refreshed bi-daily. Upon achieving 70% confluence, cells were harvested for subsequent experimental applications. For treatment, 740YP was administered at a concentration of 10 μM. 2.2 EdU staining Cells underwent fixation with polyformaldehyde for 15 minutes and subsequent rinsing with PBS (Gibco, #10010023) for five minutes. Incubation with Alexa Fluor 488-conjugated anti-EdU antibody (1:500, Thermo, #C10337, USA) in PBS lasted one hour. Further, cells were treated with DAPI dye (MERCK, #10236276001) for one minute, followed by a final PBS wash. A Leica Laser Scanning Confocal Microscope SP8 (Heerbrugg, Germany) facilitated observation, and Image J software quantified staining intensity. 2.3 Real time polymerase chain reaction (RT-PCR) RNA extraction from cells employed TRIzol reagent (Invitrogen, #15596026, USA). Reverse transcription was executed using Takara's kit (#639537, China), and the resultant cDNA amplified via qRT-PCR (SYBR green qPCR Mix, QIAGEN, #204243, USA) on a Bio-Rad CFX96 system. Primers for GDF15 and GAPDH were used, with sequences provided. The primers are listed as follows: GDF15 (F: CTCCAGATTCCGAGAGTTGC, R: CACTTCTGGCGTGAGTATCC), GAPDH (F: GTCCATGCCATCACTGCCAC, R: AAGGCTGTGGGCAAGGTCAT). 2.4 Western blotting Protein samples were isolated using RIPA buffer (Sigma, R0278, USA) with PMSF, and concentrations determined via BCA assay (Nanjing Jiancheng Bioengineering Institute, #A045-4-1, China). SDS-PAGE (10%) and subsequent wet transfer onto membranes (Milipore, GVWP02500, USA) preceded antibody incubations. Blocking utilized TBST with non-fat milk powder (Beyotime, #P0222, China), and detection was via enhanced chemiluminescence (Bio-Rad, #32106, USA). Antibodies against GDF15, β-actin, PI3K, AKT, cleaved caspase-3, Bax, N-cadherin, E-cadherin, vimentin, and GAPDH (all from abcam) were applied. GDF15 (1:1000, ab206414, abcam), β-actin (1:3000, ab5694, abcam), p-PI3K (1:2000, ab278545, abcam), PI3K (1:2000, ab302958, abcam), AKT (1:2000, ab238477, abcam), p-AKT (1:2000, ab81283, abcam), cleaved caspase-3 (1:1000, ab32042, abcam), bax (1:1000, ab32503, abcam), N-cadherin (1:2000, ab76011, abcam), E-cadherin (1:2000, ab40772, abcam), vimentin (1:1000, ab92547, abcam), GAPDH (1:3000, ab9485, abcam). 2.5 Transwell assay Cell invasion was assessed in 24-well plates using Transwell inserts (BD Bioscience, #3422, USA) with an 8.0-μm pore size, coated with matrix gel (Corning, #356255). Cells were seeded in FBS-free medium in the upper chamber, and DMEM with 10% FBS in the lower chamber. After 24 hours, cells were fixed, stained with crystal violet (Sigma-Aldrich, #C0775), and quantified using a Nikon Eclipse TE300 microscope. 2.6 Bioinformatics analysis GEPIA ( http://gepia.cancer-pku.cn/ ) and TCGA ( https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga ) and were used to analyze role of GDF15 in survival and prognosis of melanoma patients. GEPIA and TCGA data bases were used mainly to analyze the prognosis, expression of GDF15 in melanoma tissues, the relationship between GDF15 and advanced tumor stage, and the expression of GDF15 in normal skin tissues. 
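GEPIA and TCGA are used above as web interfaces rather than as code. For readers who want to reproduce the same kind of survival comparison locally, the sketch below shows one way to split a TCGA-derived melanoma table at the median GDF15 level and compare the two Kaplan-Meier curves with a log-rank test. The file name, column names and the median cutoff are assumptions for illustration (the median is also GEPIA's default group cutoff); this is not the GEPIA implementation itself.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical table assembled from TCGA-SKCM: one row per patient with
# overall-survival time (months), event indicator (1 = death) and GDF15 expression
df = pd.read_csv("tcga_skcm_gdf15.csv")   # columns: os_months, os_event, gdf15_expr

# Split patients at the median GDF15 expression level
median_expr = df["gdf15_expr"].median()
high = df[df["gdf15_expr"] >= median_expr]
low = df[df["gdf15_expr"] < median_expr]

kmf = KaplanMeierFitter()
ax = kmf.fit(high["os_months"], event_observed=high["os_event"],
             label="GDF15 high").plot_survival_function()
kmf.fit(low["os_months"], event_observed=low["os_event"],
        label="GDF15 low").plot_survival_function(ax=ax)

# Log-rank test for the difference between the two survival curves
result = logrank_test(high["os_months"], low["os_months"],
                      event_observed_A=high["os_event"],
                      event_observed_B=low["os_event"])
print(f"log-rank p-value: {result.p_value:.4f}")
```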
2.7 Cell transfection The cells were seeded in a culture dish. sh-GDF15 and sh-NC were designed and obtained from Realgene (Shanghai, China). Plasmid transfection was conducted with Lipofectamine 2000 (#11668019, Invitrogen, US). Transfection effectiveness was quantified by measuring the mRNA level of GDF15 by RT-PCR after 48 h and the protein expression of GDF15 by western blotting after 96 h. Transient transfection was then performed: pcDNA-GDF15 and control vectors were diluted with serum-free culture medium to the final incubation concentration (50 nM). 2.8 Wound healing The cells were plated in 12-well plates (#3513, Corning) and incubated for 24 h. A 1 mL pipette tip was used to make a straight scratch in the middle of each well, and the wound width was recorded. After 24 h, the wound width was measured again, and the migrated distance was calculated with Image J software. 2.9 Flow cytometry The cells were cultured as described in section 2.1 and transfected with the related vectors as described in section 2.7. The cells were digested with trypsin (#108444, Sigma-Aldrich), washed with PBS (#10010023, Gibco), and stained with propidium iodide and Annexin V-FITC (Beyotime, #C1062L, China). The cells were incubated for 30 min in the dark and analyzed with a flow cytometer. 2.10 CCK8 assay The cells were seeded in a 96-well plate at a density of 2,000 cells per well and incubated for 24 h at 37°C in a humidified atmosphere containing 5% CO 2 . 100 μL of CCK-8 reagent (Beyotime, #C0038, China) was added to each well, and the cells were incubated for 2 h at 37°C. The absorbance of each well at 450 nm was measured. 2.11 Statistical analysis The data are represented as mean ± standard deviation. Results were statistically analyzed with SPSS (Version 18). A p-value < 0.05 was set as the threshold for statistical significance. T-tests and ANOVA were applied for statistical analysis in this research.
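The wound-healing and CCK-8 readouts described above reduce to simple arithmetic once the raw measurements are exported. The sketch below illustrates two common ways of summarizing them: percent wound closure from the 0 h and 24 h gap widths, and relative viability from CCK-8 absorbances normalized to a control group after blank subtraction. The function names and example numbers are placeholders for illustration, not the authors' data or exact formulas.

```python
def wound_closure_percent(width_0h, width_24h):
    """Percent of the initial scratch width closed after 24 h."""
    return (width_0h - width_24h) / width_0h * 100.0

def cck8_relative_viability(od_treated, od_control, od_blank):
    """CCK-8 viability of a treated well relative to control, after blank subtraction."""
    return (od_treated - od_blank) / (od_control - od_blank) * 100.0

# Placeholder measurements (ImageJ gap widths in pixels, OD450 readings)
print(f"Wound closure: {wound_closure_percent(820, 410):.1f}%")                   # 50.0%
print(f"Relative viability: {cck8_relative_viability(0.62, 1.10, 0.08):.1f}%")    # ~52.9%
```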
3. Results 3.1 Increased GDF15 expression correlates with reduced survival in melanoma patients and is positively associated with advanced disease stages Analysis using the GEPIA and TCGA databases revealed a negative correlation between GDF15 expression and survival rates in melanoma patients (Figure 1 A). A significantly higher GDF15 expression was observed in melanoma patients compared to controls (Figure 1 B). GDF15 expression levels were positively associated with tumor stage progression (Figure 1 C). Furthermore, elevated GDF15 levels were detected in melanoma tissues (Figure 1 D-E) and in specific cell lines (UACC62, M14, A375, and M21) (Figure 1 F) as compared to normal skin tissues and the HEM cell line, with samples sourced from our hospital. 3.2 The knockdown of GDF15 substantially inhibited the PI3K/AKT signaling pathway in both M14 and M21 melanoma cell lines Knockdown vectors targeting GDF15 (sh-GDF15) were constructed and successfully transfected into the M14 and M21 cell lines (Figure 2 A). sh-GDF15 was observed to significantly inhibit the PI3K/AKT signaling pathway in M14 and M21 cells (Figure 2 B-C). Subsequent experiments explored whether activation of the PTEN/PI3K/AKT pathway by 740Y-P could counteract the effects of sh-GDF15, revealing that 740Y-P notably reversed the impact of sh-GDF15 (Figure 2 B-C). 3.3 Concurrent treatment with 740Y-P markedly restored cell proliferation, migration, and invasion, which were initially diminished by sh-GDF15 Cell proliferation in M14 and M21 cells was assessed using EdU staining and CCK8 assays. sh-GDF15 markedly reduced cell proliferation (Figure 3 A-C), a decline that was reversed upon treatment with 740Y-P (Figure 3 A-C). Similarly, cell migration and invasion, assessed via wound healing and Transwell assays, respectively, were significantly inhibited by GDF15 knockdown but were restored following 740Y-P treatment (Figure 4 A-D). 3.4 The effects of sh-GDF15 on cell apoptosis and chemotherapy resistance were mitigated by 740Y-P The increase in cell apoptosis induced by sh-GDF15 was diminished following 740Y-P treatment (Figure 5 A-B). Additionally, the elevated levels of the pro-apoptotic proteins Bax and cleaved caspase-3 observed after sh-GDF15 transfection were reduced with 740Y-P treatment (Figure 5 C-D). Furthermore, sh-GDF15 was found to decrease the resistance of the M14 and M21 cell lines to chemotherapy agents such as docetaxel and doxorubicin, an effect that was counteracted by 740Y-P administration (Figure 6 A-D). 3.5 740Y-P significantly counterbalanced the impact of sh-GDF15 on the expression of EMT-related proteins in M14 and M21 cell lines Treatment with sh-GDF15 resulted in decreased Vimentin and N-cadherin expression and increased E-cadherin levels in both M14 and M21 cell lines (Figure 7 A-B). However, simultaneous treatment with 740Y-P notably reversed these effects (Figure 7 A-B). These findings indicate that sh-GDF15 might regulate melanoma progression through the PTEN/PI3K/AKT pathway. In vivo validation confirmed the suppression of tumor growth by sh-GDF15 (Figure 7 C), reinforcing our conclusions.
4. Discussion GDF15 is a cytokine that has been found to be upregulated in a variety of tumor types, including breast, colorectal, and lung cancer 17 . It has been suggested to play a role in tumor progression and metastasis, as well as in the regulation of cell proliferation and apoptosis 18 . Studies have also suggested that GDF15 may be involved in the regulation of angiogenesis, inflammation, and immune responses in different types of tumors 19 . It was reported that GDF15 was overexpressed in melanoma cells and was associated with depth of tumor invasion and metastasis 4 , which is in line with our findings. However, GDF15 might play a different regulatory role in other kinds of tumors. GDF15 could inhibit the growth and bone metastasis of lung adenocarcinoma A549 cells through the TGF‐β/Smad signaling pathway 20 . Therefore, the regulatory role of GDF15 in tumors is complicated and needs to be further explored. The PTEN/PI3K/AKT signaling pathway has been shown to play a key role in the regulation of melanoma 21 . Activation of PI3K and AKT can lead to increased cell growth, increased cell migration, and increased resistance to apoptosis, all of which are associated with melanoma progression 22 . In addition, PI3K and AKT activation can lead to increased expression of pro-angiogenic factors, which can promote tumor growth and metastasis 23 . GDF15 has been shown to regulate the PTEN/PI3K/AKT signaling pathway in different tumors 24 . In the present study, we demonstrated that knockdown of GDF15 remarkably inhibited the PTEN/PI3K/AKT signaling pathway and the malignant behavior of melanoma cells. In addition, activation of the PTEN/PI3K/AKT signaling pathway by 740Y-P largely reversed the influence of GDF15 knockdown on melanoma, indicating that GDF15 might affect the development of melanoma via targeting of the PTEN/PI3K/AKT signaling pathway. A previous study indicated that GDF15 could suppress apoptosis in cancer cells by inactivating the caspase cascade, which is in line with our data. EMT-related proteins, including N-cadherin, E-cadherin, and Vimentin, have been believed to play a vital role in tumor metastasis 25 . High expression of N-cadherin is closely linked with tumor metastasis by promoting cell-cell adhesion and migration. Vimentin and E-cadherin have been found to be involved in the regulation of tumor cell motility, invasion and metastasis 26 , 27 . We showed that sh-GDF15 remarkably suppressed the levels of N-cadherin and Vimentin but elevated E-cadherin. However, the regulation of EMT-related proteins by GDF15 was reversed by treatment with 740Y-P. There are some limitations in this research. Firstly, what causes the overexpression of GDF15 in melanoma cells remains unclear. Secondly, how GDF15 regulates the various hallmarks of cancer is not clear. Meanwhile, the mechanisms by which knockdown of GDF15 affects the EMT process are not clear.
5. Conclusion In summary, we showed that sh-GDF15 could suppress the cell proliferation, migration, invasion, and EMT process of M14 and M21 cell lines through targeting of the PTEN/PI3K/AKT signaling pathway. The influences of sh-GDF15 on cell apoptosis and resistance to chemotherapy were reversed by 740Y-P. Meanwhile, the suppression of tumor growth by sh-GDF15 was validated at the in vivo level. This research might provide a novel prevention and treatment strategy for melanoma.
Competing Interests: The authors have declared that no competing interest exists. Background: Melanoma is a highly malignant tumor characterized by high mortality. Growth differentiation factor 15 (GDF15) and the PTEN/PI3K/AKT signaling pathway have been shown to be related to the regulation of tumors. Whether GDF15 can regulate melanoma by targeting the PTEN/PI3K/AKT signaling pathway remains unclear. Methods: EdU staining, wound healing, Transwell assays, and flow cytometry were performed to measure cell proliferation, migration, invasion, and apoptosis. The GEPIA and TCGA databases were applied to analyze the relationship between GDF15 and prognosis. Results: We found that high expression of GDF15 was associated with lower survival of melanoma patients and positively linked with advanced stage in analyses of the GEPIA and TCGA databases. Knockdown of GDF15 greatly inhibited the migration, invasion and proliferation ability of both M14 and M21 cells but promoted cell apoptosis. However, the influence of GDF15 knockdown on M14 and M21 cells was reversed by 740Y-P, an activator of the PI3K/AKT signaling pathway. In addition, 740Y-P significantly reversed the influence of sh-GDF15 on the expression of epithelial-mesenchymal transition (EMT)-related proteins in M14 and M21 cell lines. Significantly higher expression of GDF15 in melanoma was observed. In addition, inhibition of the PTEN/PI3K/AKT signaling pathway by knocking down GDF15 was observed in both M14 and M21 cell lines. sh-GDF15 greatly decreased the resistance of M14 and M21 cells to the chemotherapy drugs docetaxel and doxorubicin. Conclusions: GDF15 regulated the cell proliferation, apoptosis, migration, invasion, and EMT process of M14 and M21 cell lines through targeting of the PTEN/PI3K/AKT signaling pathway. This research provides a novel prevention and treatment strategy for melanoma.
Ethical Approval and Consent to participate The experimental protocol was approved by Shengli Clinical Medical College of Fujian Medical University, Fujian Provincial Hospital (#2022011). Availability of data and material The data and material used to support the findings of this study are included within the manuscript and supplementary files. Author contributions JZ conceived and designed the experiments; JZ and CC performed the experiments; JZ and CC wrote the paper.
Abbreviations GDF15: growth differentiation factor 15; EMT: epithelial-mesenchymal transition
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):1115-1123
oa_package/de/54/PMC10788716.tar.gz
PMC10788717
0
Introduction The brain is the most common site of distant metastasis in lung cancer. Whole brain radiotherapy (WBRT) and stereotactic radiotherapy (SRT) are currently the common clinical treatments. Previous studies have shown that the survival time of patients receiving SRT alone is comparable to that of WBRT+SRT 1 . CyberKnife (CK) is a common system used for stereotactic radiotherapy (SRT) of brain metastases. The orthogonal X-ray imaging device of CK can acquire real-time images of the patient's skull during treatment, which ensures treatment accuracy and achieves high-dose irradiation of the target area. However, the high prescription dose of SRT means that organs at risk (OARs) may receive higher radiation doses 2 . In addition, as the survival period of patients increases, new brain metastases may develop in patients with brain metastases from lung cancer after they receive SRT, and most of these patients may need another course of SRT or whole brain radiotherapy. Therefore, more attention should be paid to the dose to OARs in patients' SRT plans. Clinical studies have shown that the degree of hearing loss in patients is closely related to the radiation dose received by the inner ear 3 . The threshold cochlear dose for hearing loss with combined chemotherapy and radiotherapy was predicted to be 10 Gy 4 . There is currently no effective way to alleviate or treat hearing impairment in these patients. Therefore, it is necessary to find a safe and effective plan optimization and irradiation method to achieve radiation protection of the inner ear area during SRT treatment of brain metastases from lung cancer. This study retrospectively analyzed the CK treatment plans of 44 patients with brain metastases from lung cancer. The dose distribution differences in the inner ear area were compared between plans with and without an inner ear avoidance setting, providing a clinical reference for CK SRT planning in patients with brain metastases.
Materials and Methods Clinical data This study retrospectively evaluated the data of 44 patients with brain metastases from lung cancer whose lesions were within 3 cm of the ear structures (cochlea, vestibule, internal auditory canal, tympanic cavity, and bony eustachian tube). They received SRT using CK from April 2021 to April 2022 at Tianjin Medical University Cancer Institute and Hospital. Table 1 shows the patient characteristics, including 26 males and 18 females. The inclusion criteria were: (1) histologically and/or radiologically proven non-small cell lung cancer; (2) no other malignancies diagnosed within 5 years; (3) absence of nodal and metastatic disease; (4) brain metastasis confirmed by magnetic resonance imaging (MRI). The exclusion criteria were: (1) received systemic chemotherapy; (2) received whole brain radiation therapy; (3) received surgery at the brain metastasis site. This study was approved by the Ethics Committee of the Cancer Hospital of Tianjin Medical University, and the patients' informed consent was obtained and signed. SRT plan design All patients were in the supine position wearing a thermoplastic mask (CIVCO, Orange City IA, USA) before computed tomography (CT) simulation. A whole-skull CT scan was performed for each patient on a GE Discovery RT590, with a slice thickness of 1.25 mm. T1-weighted MRI axial sequences were acquired using a Siemens 1.5T scanner. The CT and MRI datasets of the patients were imported and registered in the Precision 1.1.1.1 planning system (Accuray Inc., Sunnyvale, CA, USA) for delineation of the gross tumor volume (GTV) and OARs. Planning target volumes (PTV) were obtained by expanding the GTV by 1.5 mm in three dimensions. The OARs included: the eyeball, lens, optic nerve, optic chiasm, brainstem, and inner ear (cochlea, vestibule, internal auditory canal, tympanic cavity, and bony eustachian tube). All structures for SRT planning were reviewed and approved by two independent experienced radiation oncologists and a neurosurgeon. Two different treatment plans were designed for every patient, with and without an inner ear avoidance setting. The plans were designed with the ray-tracing (RT) algorithm and the 6D Skull tracking method. The same collimator size and prescription isodose line (65-70%) were adopted in both plans for the same patient. All plans required prescription dose coverage of greater than 95% of the PTV volume, and the PTV received 1400-3000 cGy (median 2200 cGy) in 1-3 fractions (median 2 fractions). The same constraint conditions were applied to the OARs, and the dose limits included: maximum dose (D max ) of the optic pathway (including optic nerve and optic chiasm) < 10 Gy and volume receiving 8 Gy (V 8Gy ) < 0.2 cc; D max of the brainstem < 15 Gy and volume receiving 10 Gy (V 10Gy ) < 0.5 cc; hippocampal D max ≤ 17 Gy; and mean dose (D mean ) of the inner ear ≤ 15 Gy. The beam paths in the SRT plans were set to prohibit transmission through the patients' lenses, so that the lenses were well protected (D max < 1 Gy). Evaluation of treatment plans For the SRT plans designed with and without the inner ear avoidance setting, the coverage ( Coverage ) and conformity index ( CI ) of the PTV were evaluated and compared. Dose distribution differences were analyzed for the OARs, including the hippocampus, inner ear, optic pathway, brainstem and lens. The total number of beam nodes and the total monitor units (MU) were also compared. Statistical methods The plan parameters conformed to a normal distribution, and the results are expressed as mean ± standard deviation (x̄ ± s).
Student's t-test was used for pairwise comparisons between the planning parameters, and a p-value < 0.05 was considered statistically significant.
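The two plan-quality metrics compared in this study, Coverage and CI, can be computed directly from three volumes taken from a plan's dose-volume statistics. The paper does not state which conformity-index definition the Precision system reports, so the sketch below implements the widely used RTOG conformity index (prescription isodose volume divided by target volume) together with the Paddick variant for comparison; treat the choice of formula and the example numbers as assumptions, not the authors' exact definition.

```python
def coverage(tv_piv, tv):
    """Coverage: fraction of the PTV receiving at least the prescription dose.

    tv_piv -- PTV volume covered by the prescription isodose (cc)
    tv     -- total PTV volume (cc)
    """
    return tv_piv / tv

def ci_rtog(piv, tv):
    """RTOG conformity index: prescription isodose volume / target volume."""
    return piv / tv

def ci_paddick(tv_piv, tv, piv):
    """Paddick conformity index (1.0 is ideal; lower values mean poorer conformity)."""
    return tv_piv ** 2 / (tv * piv)

# Hypothetical values for one plan: a 4.0 cc PTV, 3.9 cc of it inside the
# prescription isodose, and a prescription isodose volume of 4.6 cc.
tv, tv_piv, piv = 4.0, 3.9, 4.6
print(f"Coverage = {coverage(tv_piv, tv):.3f}")             # 0.975 -> meets the >95% requirement
print(f"CI (RTOG) = {ci_rtog(piv, tv):.2f}")                # 1.15
print(f"CI (Paddick) = {ci_paddick(tv_piv, tv, piv):.2f}")  # 0.83
```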
Results Planning parameter differences From the results in Table 2 , it can be seen that the CI and Coverage obtained by the two different plans are similar, with no statistically significant difference ( p > 0.05). This indicates that adding the inner ear avoidance setting does not affect the dose distribution of the PTV, and the prescription dose coverage of the PTV in all plans met the clinical requirements. The total number of machine nodes and the total MU in the plans with the avoidance setting were higher than in the other plans, with average increases of 4.63% and 1.06%, respectively ( p < 0.05). This suggests that the inner ear avoidance setting limits the passage of beams through the inner ear area, so that, to ensure that the PTV receives the prescribed dose, the SRT plan uses more machine nodes to deliver the radiation dose, which in turn increases the total MU. The increases in the total number of machine nodes and total MU indicate a longer treatment time for patients. Dose distribution differences in OARs Table 2 also shows the dose distribution results for the OARs in the two different plans. The SRT plans with the inner ear avoidance setting reduced the D max and D mean of the inner ear by 13.76% and 12.15%, respectively ( p < 0.01). This means that SRT plans with an inner ear avoidance setting can protect the patient's hearing system and reduce the risk of radiation-induced hearing damage. Although the decrease in the dose to the inner ear area may result in a slightly elevated dose distribution around the hippocampus and brainstem, this remains within an acceptable range. Therefore, the inner ear avoidance setting in SRT plans for brain metastases near the ear structures can improve the quality of life of patients after radiotherapy.
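Because the comparison is between two plans generated for each of the same 44 patients, the percentage reductions and p-values above correspond naturally to a paired analysis. The sketch below illustrates that calculation, assuming the per-patient inner-ear D max values from both plan sets have been exported to two arrays; the numbers are placeholders, and the use of a paired t-test is an assumption consistent with the re-planning design rather than a statement of how SPSS was configured by the authors.

```python
import numpy as np
from scipy import stats

# Placeholder per-patient inner-ear Dmax values (Gy) from the two plan sets
dmax_without = np.array([9.8, 11.2, 8.5, 12.0, 10.4])   # no avoidance setting
dmax_with = np.array([8.4, 9.9, 7.2, 10.1, 9.3])        # with avoidance setting

# Mean relative reduction, reported in the paper as a percentage
reduction = (dmax_without.mean() - dmax_with.mean()) / dmax_without.mean() * 100
print(f"Mean Dmax reduction: {reduction:.2f}%")

# Paired (related-samples) t-test across the same patients
t_stat, p_value = stats.ttest_rel(dmax_without, dmax_with)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```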
Discussion SRT delivers three-dimensionally localized irradiation to patients' lesions through stereotactic technology and has become one of the commonly used treatment methods for brain metastases from lung cancer in clinical practice 5 . Because SRT plans deliver high doses in a short time, the OARs near the target can be exposed to higher radiation doses. When the eustachian tube and middle ear mucosa are exposed to high radiation doses, these structures develop inflammatory edema, which can cause eustachian tube obstruction and increased negative pressure in the middle ear, eventually inducing radiation otitis media in patients after SRT. Some studies have shown that about 12.6% of nasopharyngeal carcinoma patients develop radiation otitis media after radiotherapy 6 . Hsin CH et al. showed that, compared with traditional radiotherapy, intensity modulated radiotherapy (IMRT) is more likely to cause radiation damage to important structures such as the middle ear, eustachian tube and levator veli palatini muscle, resulting in eustachian tube dysfunction, negative pressure formation in the tympanic chamber and secretory otitis media 7 - 8 . The study of Parham K showed that the degree of apoptosis of inner ear cells was closely related to the radiation dose and duration 9 . Jereczek-Fossa BA et al. showed that up to 50% of patients with head and neck tumors develop sensorineural deafness 1 year after radiotherapy, with permanent hearing loss in severe cases 10 . Therefore, in the design and implementation of SRT treatment plans for patients with intracranial tumors, attention should be paid to the protection of the ear structures. The CyberKnife (CK) system uses a 6 degrees-of-freedom robotic arm to drive a 6 MV accelerator for radiotherapy 11 , 12 . Therefore, when the CK system is used to design an SRT plan for a brain metastasis near the ear structures, a high irradiation dose to the target area can be achieved while the irradiation dose to the normal brain tissue around the target is minimized, by selecting different nodes of the system and optimizing the monitor units delivered at each node. Through the design and analysis of different CK SRT plans for 44 brain metastases from lung cancer, we found that adding ear-structure dose limits to the SRT planning could effectively reduce the dose distribution in this area while ensuring the clinical treatment effect. In terms of treatment plan parameter evaluation, the CK SRT plans with and without dose limits for the inner ear structures both had good CI and Coverage of the PTV, with no significant difference between them. This means that adding the dose limits did not cause differences in the dose distribution of the PTV. The total number of beam nodes and MUs was increased in the SRT plans with dose limits for the inner ear structures, suggesting that, in order to protect the inner ear area, the CK system distributed the prescribed dose across more treatment nodes and the implementation of the plan required more time. In summary, during SRT planning for brain metastases near the inner ear structures, setting dose limits for the inner ear area can effectively reduce radiation damage to the patients' hearing system while ensuring the dose distribution in the target area and the therapeutic effect.
However, the clinical treatment effect before and after this optimization of the SRT plan still needs to be studied by tracking more clinical cases and follow-up records, so as to establish a reliable clinical database and provide a more useful reference for the clinical SRT treatment of patients with head and neck tumors.
# Hua Zhang and Xuyao Yu made equal contributions to this study. Competing Interests: The authors have declared that no competing interest exists. Objective: Through retrospective statistical analysis of the dose distribution with inner ear avoidance in CyberKnife (CK) plans for brain metastases from lung cancer, this study provides a reference for stereotactic radiotherapy (SRT) planning and treatment optimization. Methods: Computed tomography/magnetic resonance imaging data of 44 patients, each with a single brain metastasis from lung cancer, who had been treated with the CK system from April 2021 to April 2022, were used for re-planning and analysis. Prescribed doses of 14-30 Gy in 1-3 fractions were delivered to the metastatic lesions. The SRT plans for the same patients were re-planned with and without an inner ear avoidance setting, and the plan parameters and dose distribution differences were compared between plans. Results: All plans met the dose restrictions. There were no significant differences in the coverage ( Coverage ), conformity index ( CI ), mean dose (D mean ), maximum dose (D max ) or minimum dose (D min ) of the planning target volume (PTV). With the inner ear avoidance setting, the D max and D mean of the inner ear area decreased by 13.76% and 12.15% ( p <0.01), respectively, while the total number of machine nodes and monitor units (MU) increased by 4.63% and 1.06%. Conclusions: During SRT planning for brain metastases from lung cancer, the dose to the inner ear area can be reduced by an avoidance setting, and the patient's hearing can be well protected.
Funding This study was supported by the Tianjin Key Medical Discipline (Specialty) Construction Project (TJYXZDXK-010A). Ethics approval and consent to participate This study was approved by the Research Ethics Committee of Tianjin Tumor Hospital, and informed consent was obtained from the patients. Author contributions Conceptualization: X.Y.Y. and H.Z.; Methodology: X.Y.Y. and H.Z.; Formal analysis: Y.W.W.; Investigation: Z.Y.Y. and Y.D.; Writing original draft preparation: X.Y.Y. and H.Z.; Writing review and editing: X.G.W.; All authors have read and agreed to the published version of the manuscript. Abbreviations SRT: stereotactic radiotherapy; CK: CyberKnife; OARs: organs at risk; CI: conformity index; PTV: planning target volume; GTV: gross tumor volume; D mean : mean dose; D max : maximum dose; WBRT: whole brain radiotherapy; MRI: magnetic resonance imaging; CT: computed tomography; V 10Gy : volume receiving 10 Gy; V 8Gy : volume receiving 8 Gy; MU: monitor units
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):1110-1114
oa_package/f3/69/PMC10788717.tar.gz
PMC10788718
0
Background Hepatocellular carcinoma (HCC) is a major cause of cancer related mortality worldwide 1 . Most patients are diagnosed within advanced stage of disease, likely due to the paucity of signs and/or symptoms of early-stage disease. Furthermore, data suggest that approved screening tests for the high-risk patients are not performed optimally 2 , 3 . Therefore, an earlier diagnosis of HCC patients could result in a substantial improvement in mortality, thereby increasing the number of patients potentially eligible for curative interventions 4 , 5 . While more than half of HCC patients are diagnosed during active surveillance, the other newly diagnosed patients are identified when they present with clinical features 6 , 7 . The presenting clinical features of newly diagnosed HCC patients are highly variable can depend on disease burden and functional status of the liver. Some patients can be completely asymptomatic, or may present with non-specific signs or symptoms, while other patients show features of liver decompensation. Other presenting clinical features may involve signs or symptoms related to mechanical compression, distant metastasis, and/or paraneoplastic syndromes 8 . Previous studies demonstrate that approximately 20% of HCC patients may display diverse clinical and biochemical abnormalities that are not related to disease burden, invasiveness, or distant metastasis. These features can either precede the diagnosis of HCC or present concurrently with HCC. Such paraneoplastic syndromes can appear as the only presenting sign or symptom. It has also been suggested that these manifestations occur secondary to secretion by neoplastic cells of various active molecules, which can exert physical and/or biochemical changes in affected patients. The other proposed hypothesis emphasizes an immune mediated antigen-antibody interaction between neoplastic and normal cells where the newly developed antibodies may cross-react with normal tissues causing damage 9 , 10 . Amongst various paraneoplastic phenomena, hypoglycemia, hypercalcemia, hypercholesterolemia, erythrocytosis and thrombocytosis are the more frequent abnormalities observed in HCC 11 . Paraneoplastic cutaneous manifestations occur less commonly in HCC patients. In these syndromes, patients present with skin abnormalities secondary to the systemic effect of neoplastic cells, which are not related to the underlying liver disease, distant metastasis, or treatment side effects. Previous studies emphasized that paraneoplastic cutaneous manifestations of HCC are more commonly detected in males and in cirrhotic patients 7 , 12 . Other studies indicated associations with particular cutaneous abnormalities and some underlying liver diseases 7 . However, there is a paucity of published data regarding the clinical significance of paraneoplastic cutaneous manifestations in HCC 13 . In our study, we performed a systematic review and meta-analysis which aimed to explore the association between paraneoplastic cutaneous manifestations in HCC patients and clinical indicators, including outcomes such as survival, response to skin-directed therapy and response to cancer-directed therapy. We also attempted to evaluate the association of paraneoplastic cutaneous syndrome with disease onset, course, underlying liver disease, disease burden and association with distant metastasis.
Materials and Methods This study was performed in adherence with the Preferred Reporting Items for Systematic review and Meta-Analyses (PRISMA) guidelines. A systematic search was performed in December 2022 using MEDLINE (host: PubMed and Scopus) and supplemented by a search of Google Scholar. Inclusion criteria included case reports and or case series, involving patients with HCC, who had paraneoplastic cutaneous manifestations at any stage of their disease course. Articles that featured HCC patients with cutaneous abnormalities secondary to other causes, such as direct invasion, distant metastasis or treatment-related side effects were excluded. Review articles and case reports that did not report relevant clinical indicators, such as disease course and response to treatment were also excluded. Each article was screened by two independent reviewers (A.T and D.J). Discrepancy was addressed initially by consensus or if consensus could not be reached with discussion with a third reviewer (L.A.). Data extraction was performed separately by two reviewers. Extracted data included: name of the first author, year of publication, number of included cases in each study, and country of origin. We additionally reported type, onset, and response to treatment for each reported paraneoplastic cutaneous abnormality. The type of reported paraneoplastic cutaneous feature was extracted from each report as was whether the diagnosis was made based on the result of skin biopsy or by clinical evaluation. Extracted data also included any underlying liver disease, age, sex, and prior alcohol consumption. Disease features such as tumor size, tumor location, disease stage, presence of vascular invasion and/or metastatic disease, alpha fetoprotein (AFP) level, treatment course of HCC, and the temporality between HCC diagnosis and paraneoplastic cutaneous manifestations were also collected. Outcomes of interests included survival (defined by the status of an included patient at the time of reporting rather than at a particular timepoint in follow-up), response to cancer-directed therapy, defined as the observed response to treatment of HCC alone or when the treatment was applied at the same time with skin treatment. Response to skin-directed therapy, defined as observed response to skin treatment alone or when the skin treatment was separated in time from HCC treatment as per investigator assessment. The quality assessment of the included studies was performed using the Joanna Briggs Institute (JBI) critical appraisal framework. Data synthesis and statistical analysis Outcome of interests, including survival, response to cancer-directed and/or skin directed therapy were collected for each included individual. Categorical variables were presented either as counts or percentages, while continuous quantitative variables were displayed using means and range or standard deviation. Data analysis was performed using the Statistical Package for the Social Science (SPSS®) version 25 software (IBM, Armonk NY). The odds ratio (OR) of dichotomous outcomes was calculated using logistic regression. The following variables were explored: age, sex, type and onset of cutaneous manifestation, underlying liver disease, type of skin-directed therapy, stage of HCC, presence of disease recurrence, vascular invasion and distant metastasis, type of cancer-directed therapy, and alfa fetoprotein level. The p-value was calculated using chi-square, for categorical variables, and T-test, for continuous one. 
Statistical significance was defined as p-value < 0.05. In light of the expected small sample size, multivariable modeling was not performed, and no corrections were made for multiple significance testing. However, quantitative significance as defined by Burnand et al. was explored in addition to statistical significance 14 . Images of the paraneoplastic cutaneous manifestations Images of the most common paraneoplastic cutaneous manifestations of hepatocellular carcinoma are provided in Supplementary Material 1.
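The odds ratios for dichotomous outcomes described above were obtained by logistic regression. A minimal sketch of that calculation is shown below, assuming the extracted per-patient data have been assembled into a table with a binary outcome column (for example, death at the time of reporting) and a binary predictor column (for example, presence of dermatomyositis); the file name, column names and the use of statsmodels are illustrative stand-ins, not the authors' SPSS workflow.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical extraction table: one row per included patient
df = pd.read_csv("case_report_extraction.csv")   # columns: death (0/1), dermatomyositis (0/1)

X = sm.add_constant(df[["dermatomyositis"]])     # intercept + binary predictor
model = sm.Logit(df["death"], X).fit(disp=False)

or_estimate = np.exp(model.params["dermatomyositis"])
ci_low, ci_high = np.exp(model.conf_int().loc["dermatomyositis"])
print(f"OR = {or_estimate:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}), "
      f"p = {model.pvalues['dermatomyositis']:.3f}")
```

Note that the Wald p-value from such a model will generally differ somewhat from the chi-square and t-test p-values reported for individual variables in the paper.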
Results A total of 48 studies, comprising 60 patients with HCC were analyzed (see figure 1 for study selection schema) 15 - 62 . Supplementary Table 1 displays the detailed characteristics of the included studies. The median duration of follow up for all included case reports was around 16.4 months (range 2-62 months), while the median age of included patients was 62 years (range 18-90 years). Included studies comprised 50 case reports and one case series. Three-quarters of included patients (n: 45) were males. The most frequent reported skin abnormalities were dermatomyositis (n: 14), pityriasis rotunda (n: 10), and porphyria (n: 7). The majority of patient who presented with dermatomyositis had underlying viral hepatitis, while all reported porphyria and acanthosis cases were associated with metabolic causes of HCC, such as NAFLD (see Table 1 ). Most skin changes were detected early during the disease course, either shortly before or just after the diagnosis of HCC, while only one sixth of included patients had their paraneoplastic cutaneous manifestations discovered at a late stage or at disease relapse. More than half of the included patients had their skin manifestations as the initial presenting symptom. Only 10 patients (17%) had resectable HCC. Approximately half of the included patients received skin-directed treatment with local or systemic corticosteroids, but only a few required other immunosuppressive therapies. Most skin abnormalities responded to definitive cancer-directed therapy while one out of three patients did not show any response to cancer treatment. Chronic infections with hepatitis B (HBV) and Hepatitis C viruses (HCV) were the most frequently reported underlying liver disease. Reports of paraneoplastic skin changes were more common in patients with metastatic disease compared to those with an early-stage disease. In our study, one out of four patients underwent a definitive locoregional intervention, while one out of three received systemic therapy for an advanced stage HCC. Finally, most reported cases were found in Asia and Europe (see Table 2 ). Of note, data regarding outcomes of interests, disease and patient characteristics were not reported explicitly in some of the included case reports. For example, survival data were missing in 18 (30%) patients, while response to cancer-directed and skin-directed therapy were missing in 29 (48%) and 25 (42%) patients respectively. The type of paraneoplastic cutaneous manifestations was not specified clearly in 9 (15%) patients, and the underlying liver disease was not mentioned in 6 (10%) patients. Cancer stage and AFP levels were missing in 7 (12%) and 5 (8%) patients, respectively. Death The probability of death was lower among men as compared to women, a quantitatively significant association which approached, but did not meet statistical significance (OR 0.37, 95% CI: 0.11 to 1.2; p = 0.051). Amongst all reported skin abnormalities, pityriasis rotunda was associated with both quantitative and statistically significantly lower risk of death (OR: 0.05, 95% CI: 0.003 to 0.89; p = 0.04), while patients who presented with dermatomyositis had a quantitatively and statistically significant higher risk of death compared to other paraneoplastic skin manifestations, (OR: 3.37, 95% CI: 1.01-12.1; p = 0.03). 
As expected, patients who presented with paraneoplastic skin abnormalities concurrently with an early-stage disease had lower odds of death compared to patients who presented with later disease stages (OR: 0.15, 95% CI: 0.028 to 0.76; p = 0.01). The use of local or systemic corticosteroids was associated with better improved survival in patients who had cutaneous abnormalities in which corticosteroids are indicated as a therapy (OR: 0.24, 95% CI: 0.06 to 0.92; p = 0.03). The risk of death was not affected by the underlying cause of HCC. Patients with pre-existing viral hepatitis had similar odds for death, compared to non-viral conditions (OR: 1.1, 95% CI: 0.4 to 3.1; p = 0.43). The onset of skin manifestations appeared to have an impact on the risk of death. Patients who had paraneoplastic cutaneous manifestations prior to HCC diagnosis were quantitatively and statistically more likely to be alive at the time of reporting compared to others (OR: 0.30, 95% CI 0.09 to 0.99; p = 0.02). Response of paraneoplastic skin manifestation to skin-directed therapy Only twelve patients (20%) showed complete response of cutaneous abnormalities to skin-directed therapy, while the same number of patients were observed to have a partial response. With regards to specific paraneoplastic cutaneous manifestations of HCC, no patients with prurigo were observed to respond to skin-directed therapy. Compared to all other skin conditions, prurigo was associated with quantitatively and statistically lower odds of response (OR: 0.04, 95% CI: 0.002 to 0.9, p = 0.04). The odds of improvement of cutaneous lesions, in response to skin-directed therapy, were not affected by age or sex of included patients (OR: 0.99, 95% CI: 0.93 to 1.05, p = 0.51 and 0.65, 95% CI: 0.1 to 4.18, p = 0.69, respectively). There was a quantitatively significant, but statistically non-significant association between use of corticosteroids and improvement in paraneoplastic skin (OR: 0.21 95% CI: 0.04 to 1.11, p = 0.07). The response to skin directed therapy was not quantitatively or statistically different among patients with viral and non-viral underlying liver diseases (OR: 1.87, 95% CI: 0.39 to 9.01, p = 0.43). Response of paraneoplastic cutaneous manifestations to cancer-directed therapy About 20 out of 34 (59%) patients showed an improvement in their cutaneous abnormalities, following cancer-directed therapy. There was a quantitatively significant but statistically non-significant association between the type of cancer-directed therapy and the likelihood of improvement of cutaneous lesions. For example, 7 out of 9 patients who had undergone curative surgical intervention observed improvement in skin lesions (OR 0.32, 95% CI: 0.06 to 1.8, p = 0.19). Conversely, only 2 out of 8 patients, treated with trans arterial chemoembolization (TACE), showed improvement in skin lesions (OR: 3.4, 95% CI 0.6 to 19, p = 0.17). None of the 5 patients who receive chemotherapy showed any resolution in their skin lesions (OR: 12.3, 95% CI 0.63 to 240, p = 0.09). Only 2 out of 7 patients who received multiple therapeutic modalities observed any alleviation in skin lesions OR: 2.8, 95% CI 0.48 to 16.5 p = 0.25. Again, age and sex did not affect response of skin lesions to cancer directed therapy. Onset and type of paraneoplastic cutaneous skin manifestations had no impact on response to cancer-directed therapy.
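Several of the odds ratios quoted above have extremely wide confidence intervals (for example, OR 0.04 for prurigo and OR 12.3, 95% CI 0.63 to 240, for chemotherapy) because one cell of the underlying 2x2 table contains zero events, such as the five chemotherapy-treated patients with no skin response. The sketch below shows how such an estimate is commonly stabilized with the Haldane-Anscombe correction (adding 0.5 to every cell) before computing a Wald-type confidence interval; this is a standard textbook approach offered for illustration, with hypothetical counts, and is not necessarily the exact correction applied in this study.

```python
import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table.

    a, b -- events / non-events in the exposed group
    c, d -- events / non-events in the unexposed group
    A 0.5 continuity (Haldane-Anscombe) correction is applied when any cell is zero.
    """
    if 0 in (a, b, c, d):
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    or_est = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(or_est) - z * se_log_or)
    high = math.exp(math.log(or_est) + z * se_log_or)
    return or_est, low, high

# Hypothetical 2x2 table: 0 of 5 exposed patients and 18 of 29 unexposed patients responded
or_est, low, high = odds_ratio_with_ci(a=0, b=5, c=18, d=11)
print(f"OR = {or_est:.2f} (95% CI {low:.2f} to {high:.2f})")   # small OR with a very wide interval
```

The zero-cell group produces a very small point estimate with an interval spanning well past 1, which mirrors the imprecision of the subgroup estimates reported in this review.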
Discussion

Hepatocellular carcinoma is the most prevalent primary liver cancer, representing 90% of all histological subtypes, and is among the most common malignancies worldwide in both men and women. HCC remains a major cause of cancer-related mortality, exceeded only by lung and gastric cancers 63, 64. Paraneoplastic syndromes have been described in HCC, and most features have been associated with worse survival, likely reflecting an association with a higher burden of disease 65, 66. However, it has also been suggested that specific paraneoplastic features tend to occur early in the disease course, while others are usually detected at later stages 67. HCC is associated with distinctive skin abnormalities that are not related to distant metastasis or drug toxicity and therefore reflect a paraneoplastic process. Paraneoplastic cutaneous manifestations encompass a wide range of skin abnormalities detected in patients with HCC, which could have an impact on outcomes. However, the clinical significance of paraneoplastic cutaneous manifestations in HCC is not clear, likely because of their low prevalence compared with the other paraneoplastic phenomena that may accompany HCC 68. With the recent introduction of multiple systemic therapies to the treatment armamentarium of advanced HCC, outcomes have improved considerably 69. However, most of the recently approved agents are either kinase inhibitors or immune checkpoint inhibitors, which are known to cause various forms of skin toxicity 70, 71, making the differentiation between paraneoplastic features and treatment-related side effects of great interest, mainly to improve treatment decisions for affected patients. In addition, cancer can spread through blood or lymphatics, causing different forms of metastatic skin lesions; reliance on clinical features alone is sometimes not enough to establish the cause of skin abnormalities, and skin biopsies are required to differentiate between these possibilities 72. Extrapolating from published data on non-cutaneous paraneoplastic syndromes 65-67, identification of most paraneoplastic syndromes may indicate a higher disease burden. We therefore performed a systematic review focusing on HCC patients who had confirmed paraneoplastic cutaneous manifestations at any point during their disease course. Our results confirmed a significant association between the onset of paraneoplastic cutaneous manifestations and the odds of death at the time of case reporting: patients who presented with cutaneous abnormalities prior to HCC diagnosis had more favorable survival, although this result may have been confounded by lead-time bias. There was substantial variation in survival among patients who presented with different forms of paraneoplastic cutaneous manifestations, with certain features carrying worse outcomes than others. As expected, patients who had advanced HCC and patients who did not receive definitive treatment for HCC had worse outcomes. In contrast, the use of certain therapeutic agents such as corticosteroids, particularly in those with cutaneous diseases that are typically managed with steroids, was associated with better survival. However, in this observational study, the impact of confounding by indication cannot be excluded, especially as the small sample size did not allow for multivariable analyses.
Certain types of paraneoplastic cutaneous manifestations did not show any improvement with skin-directed therapy, while most skin changes responded to cancer-directed therapy. Finally, a significant association was observed between certain cutaneous abnormalities and particular liver diseases. Our study represents an extensive systematic review examining the clinical significance of different paraneoplastic cutaneous manifestations in HCC patients 8. Paraneoplastic cutaneous manifestations represent a heterogeneous group of skin abnormalities, which makes it statistically challenging to examine the clinical significance of different cutaneous lesions. In our study, we divided skin abnormalities into subgroups and studied the relative impact of each type compared with the other skin changes. Furthermore, the associations between paraneoplastic cutaneous manifestations and other disease and patient characteristics were examined. This study has some limitations. First, various forms of paraneoplastic cutaneous manifestations were reported in the included case reports, which results in heterogeneity of the summary data. Furthermore, comparison groups were often small, resulting in low power to detect statistical significance. While we attempted to explore quantitative as well as statistical significance to address this low statistical power, residual uncertainty remains. Second, even though our study includes a relatively large number of patients, missing data regarding patient and disease characteristics and the outcomes of interest were noted in many of the included case reports. This, too, could have affected the certainty of our results. Third, we only analyzed the group of patients who had both paraneoplastic cutaneous manifestations and HCC, which may increase the risk of selection bias. Additionally, we were unable to compare the effects of paraneoplastic cutaneous syndromes between affected patients and other HCC patients without skin involvement, and instead focused on differences in outcome and response to therapy between the different skin conditions seen in the setting of HCC. Fourth, we acknowledge substantial heterogeneity, with a lack of standardization in patients' diagnoses and differences in treatment options; we also note that the outcomes of interest were not reported consistently across all included case reports. To overcome these limitations, we tried to apply unified standards in data synthesis and reporting, but residual uncertainty relating to this heterogeneity may remain. Finally, as time-to-event data were not reported routinely, the survival analysis was performed using the status of patients at the time of reporting. This could have introduced follow-up bias, with studies with longer follow-up observing a larger proportion of deaths. However, considering the generally unfavorable outcomes of HCC patients, a median follow-up duration of 29 months would be considered adequate and should minimize the effect of follow-up duration on the analysis. In summary, paraneoplastic cutaneous manifestations represent a heterogeneous group of skin abnormalities with a varying prognostic impact on HCC patients. Multiple factors, such as the onset and type of skin abnormalities, cancer burden, and type of treatment, were shown to be associated with outcomes.
The results of our study also provide clinicians with useful data on the response to cancer-directed and/or skin-directed therapies, which may support better treatment-related decisions in HCC patients who present with paraneoplastic cutaneous manifestations. Further research is needed to examine the effect of paraneoplastic cutaneous manifestations in relation to other paraneoplastic syndromes, and to evaluate the exact burden of paraneoplastic cutaneous manifestations in affected patients compared with other HCC patients.
Competing Interests: The authors have declared that no competing interest exists. Background: There remains a scarcity of published data on the clinical significance of paraneoplastic cutaneous manifestations in hepatocellular carcinoma (HCC). Method: A systematic search of MEDLINE was performed in December 2022. Inclusion criteria comprised studies reporting on patients with HCC who had paraneoplastic cutaneous manifestations. Outcomes of interest comprised survival and response to cancer-directed and/or skin-directed therapy. Results: A total of 48 studies comprising 60 HCC patients were included in the analysis. The most frequently reported skin abnormalities were dermatomyositis, pityriasis rotunda, and porphyria. Most patients who presented with dermatomyositis had underlying viral hepatitis, while all reported porphyria and acanthosis cases were associated with metabolic causes of HCC, such as steatosis. Paraneoplastic skin changes were more common in patients with metastatic disease. Pityriasis rotunda was associated with the lowest risk of death (OR: 0.05, 95% CI: 0.003 to 0.89; p = 0.04), while dermatomyositis carried a statistically significantly higher risk of death (OR: 3.37, 95% CI: 1.01 to 12.1; p = 0.03). Most patients showed an improvement in their cutaneous abnormalities following cancer-directed therapy. Conclusion: Paraneoplastic cutaneous manifestations are reported more frequently in patients with a higher burden of disease, especially in the presence of metastases. Certain cutaneous manifestations have prognostic implications.
Supplementary Material
Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Ethical approval & informed consent Meta-analysis is exempt from ethics approval, as the study authors collected and synthesized data from previous studies in which informed consent had been obtained. Author contributions Laith Al-Showbaki: Conception and design of work, Data analysis and interpretation, Drafting the article, Final approval of the version to be published. Ahmad A. Toubasi: Data collection, Data analysis and interpretation, Drafting the article. Dunia Z. Jaber: Data collection, Drafting the article. Mohammad Al Shdifat: Data collection, Drafting the article. Noor Al-Maani: Critical revision of the article. Omar Qudah: Critical revision of the article. Feras Farargeh: Critical revision of the article. Eitan Amir: Drafting the article, Critical revision of the article. Data availability statement The authors confirm that the data supporting the findings of this study are available within the article (references) and/or its supplementary materials. Other supporting data are available from the corresponding author, [L.A], upon reasonable request.
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):1021-1029
oa_package/da/34/PMC10788718.tar.gz
PMC10788719
0
Introduction

Hepatocellular carcinoma (HCC) is the sixth most common cancer and the third leading cause of tumor-related death worldwide 1. Although surgery is the most effective treatment for HCC 2, the insidious onset of HCC and early metastasis mean that the majority of patients are already in advanced stages when diagnosed and are no longer eligible for surgery 3. In recent years, the development of immunotherapy and targeted therapy has brought new hope for advanced HCC 4; it is therefore of great importance to find novel therapeutic biomarkers and targets. FEN1, located on human chromosome 11q12-13, is a structure-specific nuclease involved in DNA replication, synthesis, damage repair, nonhomologous end-joining and homologous recombination 5-7. As such, FEN1 is essential for the maintenance of genomic stability 8, 9. Previous studies have shown that FEN1 is abnormally expressed in lung, breast, gastric, prostate and other types of cancer and is closely correlated with the occurrence and development of tumors 10-12. FEN1 has been reported to be highly expressed in liver cancer 13. It is also highly expressed in HCC and can promote HCC progression through methylation, ubiquitination, and action on miRNAs 13-15, but its role in the HCC cell cycle remains unknown. In view of the role of FEN1 in DNA replication, we speculated that FEN1 is crucial for the proliferation of HCC cells. In this study, bioinformatics prediction and clinical specimen verification confirmed that FEN1 was highly expressed in HCC and correlated with poor prognosis. We found that FEN1 could promote the proliferation, colony formation, wound healing, migration, and invasion of HCC cells. Gene set enrichment analysis (GSEA) revealed that high expression of FEN1 was significantly related to the cell cycle pathway. Cellular and molecular experiments demonstrated that FEN1 regulates the cell cycle transition from G2 to M phase by modulating Cdc25C, CDK1 and Cyclin B1, thus promoting the proliferation of HCC cells. Our study suggests that FEN1 may be a potential target for the treatment of HCC.
Material and methods

HCC Datasets
The TCGA-LIHC dataset and corresponding clinical data used in this study were downloaded from The Cancer Genome Atlas (TCGA) portal ( https://gdc-portal.nci.nih.gov/ ).

Clinical specimen collection
The HCC and adjacent normal tissues analyzed in this study were collected from patients at Shanghai General Hospital between January 2013 and December 2015. Inclusion criteria: (1) age > 18 years; (2) primary liver cancer; (3) no preoperative treatment such as immunotherapy, chemotherapy, or radiotherapy. Patients with unknown T, N, or M stage were excluded. This research was approved by the Ethics Committee of Shanghai General Hospital, and informed consent was obtained from all patients enrolled in the study.

Real-time quantitative PCR (RT-qPCR)
According to the manufacturer's instructions, TRIzol (Takara Biotechnology, Japan) was used to extract total RNA from the tissue samples and HCC cell lines. cDNA was then synthesized using a reverse transcription kit (Takara Biotechnology, Japan) for subsequent PCR assays. Relative mRNA expression levels were normalized to GAPDH and calculated by the 2^(-ΔΔCt) method. All samples were analyzed in triplicate. The primers are shown in Table S1.

Immunoblot analysis
Total protein was extracted from the tissue samples and cells, and protein concentrations were quantified using a BCA kit (Yeasen, Shanghai, China). Equal amounts of protein were separated by SDS-PAGE and transferred to PVDF membranes (Millipore, Billerica, MA, USA). The membranes were then blocked and incubated with primary and secondary antibodies, and protein signals were detected by ECL chemiluminescence. GAPDH was used as the internal reference protein. Antibodies against the following proteins were used: GAPDH (60004-1-Ig, Proteintech), FEN1 (ab133311, Abcam), Cdc25C (ab32444, Abcam), CDK1 (ab133327, Abcam) and Cyclin B1 (ab32053, Abcam).

Immunohistochemistry (IHC)
The sections were baked at 56°C for 2 h for dewaxing, boiled in citrate buffer for antigen retrieval, and blocked using 3% hydrogen peroxide. The sections were incubated with the primary antibody against FEN1 (1:200, ab133311, Abcam) at 4°C overnight and then with a biotinylated goat anti-rabbit secondary antibody for 1 h. Finally, the reaction was visualized using DAB, and the sections were counterstained with hematoxylin. IHC scores were calculated by multiplying the staining intensity score by the staining extent score (staining intensity: negative = 0, weak = 1, moderate = 2, strong = 3; staining extent: 0 = no staining, 1 = 0%-25%, 2 = 25%-50%, 3 = 50%-75% and 4 = 75%-100%). Scores > 4 were regarded as high FEN1 expression, while scores of 1, 2, 3 or 4 were regarded as low FEN1 expression.

Cell culture and transient transfection
HCC cell lines (Huh-7, Hep-3B, Hep-G2, Bel-7402, SMMC-7721 and HCCLM3) were cultured in a humidified incubator containing 5% CO2 at 37°C. Lentivirus was produced by transfection of HEK-293T cells with the psPAX2 and pMD2.G plasmids using Lipofectamine 2000 (Invitrogen, CA, USA) according to the manufacturer's instructions. The sequences of the shRNAs are shown in Table S2.

CCK-8 assay
Cell viability was measured using the Cell Counting Kit-8 (CCK-8) assay (NCM Biotech, Suzhou, China) to evaluate cell proliferation. Cells were seeded into 96-well plates at 2000 cells per well.
The absorbance at 450 nm was measured with a spectrophotometer at different time points (0, 12, 24, 48, and 72 h).

Colony formation assay
Cells were seeded into 6-well plates at 1000 cells per well. After 2 weeks of culture, the cells were fixed and stained with crystal violet.

Scratch wound healing assay
Cells were seeded into 6-well plates and cultured to 85% confluence. The cell layers were scratched using a sterile 200 μL pipette tip and then washed three times with PBS to remove the detached cells. The remaining cells were cultured in serum-free DMEM (Gibco, USA), observed using an inverted microscope, and photographed at 0 and 48 h.

Transwell assay
Transwell assays were conducted for cell migration and invasion studies. Cells in serum-free medium were seeded into the upper chamber, and 600 μL of DMEM containing 10% FBS (Gibco, USA) was added to the lower chamber. Matrigel (Corning, NY, USA) was used to precoat the upper chamber before cell seeding. The cells were fixed after 24 h of culture, and cells on the underside of the membrane were stained with 0.1% crystal violet and counted under a microscope.

Flow cytometry
A cell cycle detection kit (MultiSciences, Hangzhou, China) was used to assess the cell cycle distribution of HCC cells according to the manufacturer's instructions.

EdU assay
EdU detection was conducted with the Cell-Light EdU Apollo 567 kit (catalog no. C10310-1; RiboBio) according to the manufacturer's instructions.

Statistical analysis
The relationships between FEN1 expression and the clinicopathological features of HCC patients were analyzed by the chi-square test. The Kaplan-Meier method with the log-rank test was used for survival analysis; the cutoff for high versus low FEN1 expression was its mean value. Data are presented as the mean ± SD. Student's t-test was used to compare two groups, and one-way ANOVA was used for comparisons among multiple groups. P < 0.05 was considered statistically significant.
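As an informal illustration of the quantification steps described above, the short Python sketch below reproduces the 2^(-ΔΔCt) calculation and the IHC scoring rule (intensity x extent, with scores > 4 counted as high expression). The Ct values and the helper-function names are hypothetical and are not taken from the study data:

def relative_expression(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    # Relative mRNA level by the 2^(-DeltaDeltaCt) method, normalized to the reference gene (GAPDH).
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    return 2 ** (-(delta_ct_sample - delta_ct_control))

def ihc_score(intensity, extent):
    # Total IHC score = staining intensity (0-3) x staining extent (0-4); a score > 4 counts as high expression.
    total = intensity * extent
    return total, ("high" if total > 4 else "low")

# Hypothetical Ct values for FEN1 (target) and GAPDH (reference) in a tumour vs. its paired control
print(relative_expression(24.1, 17.8, 26.5, 17.9))   # fold change relative to the control tissue
print(ihc_score(3, 2))                               # (6, 'high')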
Results

FEN1 is upregulated in HCC and correlated with poor prognosis
To explore the expression of FEN1 in HCC, we first downloaded the gene expression data of HCC tissues from the TCGA database and analyzed the mRNA expression level of FEN1. The results showed that FEN1 was upregulated in HCC tissues compared with normal tissues (Figure 1 A). Analysis of the mRNA data in the Oncomine database yielded similar results (Figure S1 A). Paired comparisons of the TCGA data also showed that FEN1 was highly expressed in HCC tissue compared with adjacent normal tissue (Figure S1 B). Next, we used qPCR to analyze FEN1 mRNA levels in 32 pairs of HCC and matched adjacent normal tissues from Shanghai General Hospital; 24/32 (75%) of the HCC tissues exhibited higher FEN1 mRNA levels than the corresponding adjacent normal tissues (Figure 1 B). In addition, Western blotting showed that FEN1 protein expression was also upregulated in HCC tissues compared with adjacent tissues (Figure 1 C). Moreover, IHC staining was conducted to determine FEN1 protein expression in a tissue microarray (TMA) containing 57 pairs of HCC and matched adjacent normal tissues; FEN1 protein expression was higher in HCC tissues than in adjacent tissues (Figure 1 D and Figure S1 C). In summary, these findings indicated that FEN1 was upregulated in HCC tissues at both the mRNA and protein levels. We then analyzed the correlations between FEN1 protein expression and HCC clinicopathological features. The 57 HCC patients were divided into FEN1 high-expression (61.4%, 35/57) and low-expression (38.6%, 22/57) groups according to IHC score. High expression of FEN1 was positively correlated with tumor T stage, tumor M stage, tumor stage and tumor grade (Figure 1 E-G and Figure S1 D), while there was no correlation between FEN1 expression and sex, age or tumor N stage (Table 1). In addition, Kaplan-Meier survival analysis revealed that HCC patients with high FEN1 expression had a worse prognosis than those with low FEN1 expression (Figure 1 H). Analysis of the public data from the TCGA database showed similar results (Figure S1 E).

FEN1 promotes the proliferation, migration and invasion of HCC cells
We used qPCR and Western blotting to detect the expression of FEN1 in HCC cell lines. Among the six HCC cell lines, Bel-7402 and Hep-3B cells showed the highest and lowest FEN1 expression levels, respectively (Figure 2 A). We therefore selected the Bel-7402 and Hep-3B cell lines for further experiments and established cell models of FEN1 knockdown (Bel-7402) and overexpression (Hep-3B) (Figure S2 A-B). The CCK-8 assay showed that knockdown of FEN1 in Bel-7402 cells significantly reduced cell viability compared with the control group, while overexpression of FEN1 increased Hep-3B cell viability (Figure 2 B). The colony formation assay showed that the number of colonies decreased significantly upon FEN1 knockdown, whereas Hep-3B cells overexpressing FEN1 yielded the opposite result (Figure 2 C). In addition, the wound healing capability of sh-FEN1 cells was significantly weaker at 48 h post-scratch compared with the control group, whereas cells overexpressing FEN1 displayed significantly enhanced wound healing ability (Figure 2 D). Regarding cell migration and invasion, Transwell assays showed that FEN1 knockdown markedly reduced the migration and invasion abilities of Bel-7402 cells.
Conversely, overexpression of FEN1 in Hep-3B cells significantly enhanced the migration and invasion potential of the cells (Figure 2 E-F).

FEN1 promotes the proliferation of HCC cells by activating cell cycle progression from G2 to M phase
To further explore how FEN1 promotes HCC cell proliferation, GSEA of the FEN1 high- and low-expression patient groups from the TCGA database was carried out. We selected the most significantly enriched pathway according to the normalized enrichment score (NES). The results showed significant enrichment of cell cycle pathways in the patient group with the FEN1 high-expression phenotype (Figure 3 A). Subsequently, we analyzed cell cycle progression in Bel-7402 and Hep-3B cells by fluorescence-activated cell sorting (FACS). As shown in Figure 3 B, FEN1 knockdown resulted in a marked increase in the percentage of G2/M-phase cells and a decrease in the percentage of G1-phase cells. In contrast, FEN1 overexpression resulted in a decrease in the percentage of G2/M-phase cells and an increase in the percentage of G1-phase cells. Furthermore, the EdU assay demonstrated that FEN1 knockdown inhibited the proliferation of Bel-7402 cells, while FEN1 overexpression promoted the proliferation of Hep-3B cells (Figure 3 C). These results suggest that FEN1 may promote HCC cell proliferation by activating cell cycle progression from G2 to M phase.

FEN1 regulates the cell cycle transition from G2 to M phase by modulating Cdc25C, CDK1 and Cyclin B1 expression
To investigate the mechanism underlying the proliferation-promoting function of FEN1, cell cycle-related proteins were analyzed using Western blot and RT-qPCR. The Cyclin B1/CDK1 complex is crucial for regulating the G2/M transition 16, and Cdc25C can activate the Cyclin B1/CDK1 complex by inducing CDK1 dephosphorylation, thereby promoting the G2/M transition in mitotic cells 17. Western blot demonstrated that knockdown of FEN1 decreased the protein level of Cdc25C in Bel-7402 cells, and the expression levels of CDK1 and Cyclin B1, the downstream proteins of Cdc25C, were also decreased upon FEN1 knockdown. Conversely, overexpression of FEN1 increased the levels of Cdc25C, CDK1 and Cyclin B1 in Hep-3B cells (Figure 4 A). The RT-qPCR results showed changes in mRNA levels similar to those observed for the proteins (Figure 4 B). In addition, we analyzed gene expression correlations between FEN1 and Cdc25C, CDK1 or Cyclin B1 using the bioinformatics tool GEPIA ( http://gepia.cancer-pku.cn/ ). FEN1 mRNA levels were positively correlated with the mRNA expression levels of Cdc25C, CDK1 and Cyclin B1 (Figure 4 C-E). In summary, our data suggest that FEN1 may promote the cell cycle transition from G2 to M phase by modulating Cdc25C, CDK1 and Cyclin B1 expression, thus promoting the proliferation of HCC cells.
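The GEPIA correlation analysis referred to above is, at its core, a correlation of expression values across tumours. Purely as an illustrative sketch (the expression vectors below are invented, and GEPIA's own implementation and transformation of TCGA data may differ), the following Python code computes a Pearson correlation coefficient of the kind reported in Figure 4 C-E:

from math import sqrt

def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length expression vectors.
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sd_x = sqrt(sum((xi - mean_x) ** 2 for xi in x))
    sd_y = sqrt(sum((yi - mean_y) ** 2 for yi in y))
    return cov / (sd_x * sd_y)

# Invented log2-transformed expression values for FEN1 and CDK1 across six tumours
fen1 = [5.1, 6.3, 4.8, 7.0, 5.9, 6.6]
cdk1 = [4.7, 6.0, 4.5, 6.8, 5.5, 6.1]
print(pearson_r(fen1, cdk1))   # a value close to +1 indicates positive co-expression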
Discussion

As the third leading cause of cancer-related death in the world, liver cancer imposes a heavy burden on many countries 16. Given the low detection rate of early-stage liver cancer, metastasis and intraperitoneal spread have often already occurred at the time of diagnosis; the overall 5-year survival rate for liver cancer patients is only about 20% 17, reflecting a poor prognosis. The occurrence and development of liver cancer is a complex biological process involving the interaction of multiple molecules that are regulated by key genes 18, 19. Research on the regulatory mechanisms of these key genes and progress in molecularly targeted drugs provide new hope for the treatment of liver cancer 20, 21. As a structure-specific 5'-nuclease, FEN1 plays important roles in DNA replication and damage repair 22. In addition, studies have shown that FEN1 is highly expressed in various types of cancer cells and is closely associated with the occurrence and development of tumors 10, 12. These findings suggest that FEN1 may act as a double-edged sword in cancer. Our study showed that FEN1 was significantly upregulated in HCC and that high FEN1 expression was associated with higher tumor T stage, tumor M stage and tumor stage. Moreover, Kaplan-Meier analysis revealed that high expression of FEN1 is indicative of poor prognosis in HCC patients, consistent with the findings of Li et al 13. We further showed that knockdown of FEN1 inhibited the proliferation, migration and invasion of HCC cells, whereas overexpression of FEN1 promoted cell proliferation, migration and invasion, indicating that FEN1 plays a vital role in the development of HCC. Cell cycle progression is a complex process involving a series of cell cycle regulators 23, 24. At different stages, different cell cycle regulators display distinct expression and degradation patterns, culminating in the division of a mother cell into two daughter cells through mitosis 25, 26. In this study, GSEA showed that high expression of FEN1 was closely associated with the cell cycle. In addition, cell cycle and functional experiments showed that FEN1 knockdown could inhibit cell proliferation by inducing cell cycle arrest at the G2/M transition. As an important cell cycle regulatory protein, Cdc25C is involved in activating the Cyclin B1/CDK1 complex to initiate mitosis 27. The Cyclin B1/CDK1 complex is a key regulator of the G2/M transition 28; it phosphorylates a variety of proteins prior to the G2/M transition, initiating mitotic events including nuclear envelope breakdown, centrosome separation and chromosome condensation 29. In this study, we found that FEN1 expression was positively correlated with the expression levels of Cdc25C, CDK1 and Cyclin B1, and analysis with the bioinformatics tool GEPIA led to the same conclusion. However, our study only investigated the correlation between FEN1 and Cdc25C, CDK1 or Cyclin B1, and the precise mechanism remains to be explored. In summary, our study suggests that FEN1 promotes the proliferation, migration and invasion of HCC cells by activating the cell cycle transition from G2 to M phase through modulating Cdc25C, CDK1 and Cyclin B1 expression. FEN1 is an important biomarker for predicting the prognosis of HCC patients, and our findings may provide a new focus in the search for treatment strategies for liver cancer.
# This author contributed equally to this work. Competing Interests: The authors have declared that no competing interest exists. Flap endonuclease 1 (FEN1) is a structure-specific nuclease that is involved in the occurrence and development of various types of tumors. Previous studies have shown that FEN1 plays an important role in the development of hepatocellular carcinoma (HCC); however, the molecular mechanisms remain to be fully elucidated, and in particular its effect on the HCC cell cycle has not been investigated. In this study, via bioinformatics prediction and clinical specimen verification, we confirmed that FEN1 was highly expressed in HCC and correlated with poor prognosis. Knockdown or overexpression of FEN1 inhibited or promoted, respectively, the proliferation and invasion of HCC cells. Importantly, cell cycle and functional experiments showed that FEN1 could promote cell proliferation by driving the cell cycle transition from G2 to M phase. Further studies indicated that FEN1 regulated the G2/M transition by modulating cell division cycle 25C (Cdc25C), cyclin-dependent kinase 1 (CDK1) and Cyclin B1 expression. In summary, our research suggests that FEN1 promotes the proliferation, migration and invasion of HCC cells via activating cell cycle progression from G2 to M phase, indicating that FEN1 may be a potential target for the treatment of HCC.
Supplementary Material
This work was supported by the National Natural Science Foundation of China (81970568, 81670595) and the cultivation project of Huadong Hospital for the National Natural Science Foundation program (GZRPY009Y). Author Contributions Conceptualization, Tao Wang, Haijiao Zhang, Junming Xu and Rangrang Wang; Data curation, Dan Huang, Rangrang Wang and Haijiao Zhang; Project administration, Tao Wang and Yang Zhang; Experiments, Tao Wang, Dan Huang, Haijiao Zhang and Yang Zhang; Funding acquisition, Rangrang Wang and Junming Xu; Investigation, Dan Huang, Haijiao Zhang and Yang Zhang; Data analysis, Tao Wang, Rangrang Wang, Haijiao Zhang and Dan Huang; Writing, Tao Wang, Haijiao Zhang and Rangrang Wang; Writing, review and revision, Rangrang Wang, Haijiao Zhang, Yang Zhang and Junming Xu.
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):981-989
oa_package/a0/0a/PMC10788719.tar.gz
PMC10788720
0
1. Introduction

Gastric cancer (GC) is a common malignant tumor of the digestive system, with various risk factors contributing to its development, including Helicobacter pylori infection, low socioeconomic status, dietary factors, family history, and inherited predisposition 1. Globally, GC poses a significant burden, accounting for over 1 million new cases and approximately 769,000 deaths in 2020, making it one of the leading causes of cancer mortality 2. Early symptoms of GC are often nonspecific and can be mistaken for common digestive disorders, resulting in delayed diagnosis and advanced disease stage at presentation 1. Patients with advanced gastric cancer (AGC) may experience severe abdominal pain, hemorrhage, and melena, and surgical intervention may not be a suitable treatment option. Chemotherapy, such as the SOX regimen, therefore becomes crucial in slowing disease progression and improving quality of life 3. The SOX regimen, comprising Oxaliplatin and Tegafur, is widely used as a first-line clinical treatment for AGC. Oxaliplatin targets DNA by forming platinum-DNA cross-links, thereby inhibiting tumor cell proliferation and differentiation 4. Tegafur enhances the antitumor effect by increasing the concentration of 5-fluorouracil (5-FU) monophosphate, a phosphorylated metabolite of 5-FU, in the tumor 5. However, despite the effectiveness of the SOX regimen, it may also lead to significant side effects, including nausea, vomiting, liver function damage, and peripheral neurotoxicity 6. Therefore, exploring alternative treatment options that can complement conventional therapy, enhance its clinical effectiveness, and reduce adverse events is of great importance. Traditional Chinese medicine (TCM) has a long history of use in AGC, with reported effects including inhibition of AGC cell proliferation through the promotion of apoptosis and reduction of the toxic side effects of chemotherapy 7. Chinese herbal injections (CHIs) are formulated by extracting active ingredients from Chinese medicines using modern science and technology. In contrast to Chinese medicine decoctions, CHIs enter the circulatory system directly, which improves their effectiveness, speed of onset, and duration of action 8. Currently, CHIs are commonly used alongside the SOX regimen in clinical practice in China to improve treatment effectiveness and reduce adverse reactions. While a variety of CHIs are available, there is insufficient evidence regarding their relative effectiveness, safety, and optimal combination with the SOX regimen for treating AGC. As a novel evidence-based statistical method, network meta-analysis (NMA) combines direct and indirect evidence to evaluate multiple treatments in a single analysis, expanding upon the principles of conventional meta-analysis 9, 10. NMA enables the simultaneous evaluation of various interventions, providing valuable information for clinical decision-making even when direct comparisons are not available 11, 12. Additionally, NMA allows each intervention to be ranked according to its effectiveness and the probability of being the optimal treatment 13. Hence, we performed an NMA to rank the clinical effectiveness and safety of different CHIs when combined with the SOX chemotherapy regimen. The objective of this study was to provide evidence for the selection of appropriate CHIs in the treatment of patients with AGC.
In doing so, we aimed to help improve clinical effectiveness and reduce the occurrence of adverse events in patients with AGC, thereby enhancing their overall treatment outcomes.
2. Methods

This systematic review was reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines 14 and the Cochrane Handbook for Systematic Reviews of Interventions 15. The PRISMA checklist is provided in Supplement S1. The protocol was registered with PROSPERO (registration number CRD42022383478).

2.1. Search strategy
We applied the search strategy to eight databases: China National Knowledge Infrastructure, WanFang, Chinese Scientific Journal Database (VIP), SinoMed, PubMed, Cochrane Library, Excerpta Medica Database (Embase), and Web of Science (WOS). Medical Subject Headings (MeSH) and free-text words were combined. Supplement S2 lists the search strategies for the corresponding databases. Only Chinese- and English-language studies were included.

2.2. Eligibility criteria
(1) Study type: published randomized controlled trials (RCTs). (2) Study subjects: patients with a pathologically confirmed diagnosis of AGC and Tumor, Node, Metastasis (TNM) stage III-IV 16, regardless of age, gender, or race. (3) Intervention group: patients with AGC who were treated with the SOX regimen (Oxaliplatin plus Tegafur) combined with CHIs. (4) Outcomes: the primary outcomes were clinical effectiveness and the improvement rate of the Karnofsky Performance Status (KPS) score. Clinical effectiveness was based on the World Health Organization (WHO) effectiveness criteria for solid tumors 17 or the Response Evaluation Criteria in Solid Tumors (RECIST); the two criteria are consistent for complete response (CR) and partial response (PR) 18. Total clinical effectiveness = (number of CR + number of PR) / total number of cases. The KPS score is closely related to the patient's health status. Comparing KPS scores before and after treatment, an increase of ≥10 points after treatment was considered an improvement, a decrease of ≥10 points was rated as a deterioration, and a change of <10 points was classified as stable. The improvement rate of KPS was thus defined as the proportion of patients with KPS improvement (≥10 points) among all patients 19. Secondary outcomes included leukopenia, thrombocytopenia, nausea and vomiting, liver function damage, peripheral neurotoxicity, and survival data.

2.3. Study selection
Two reviewers independently read the titles, abstracts, and full texts to identify suitable studies and extracted data from eligible studies. A third reviewer checked and verified the database. If the data extracted by the two reviewers were inconsistent, the three reviewers discussed until agreement was reached. The data extracted were as follows: first author name, year of publication, sample size (number of AGC patients in the intervention group and in the control group), mean age (of the intervention group and of the control group), treatment mode (of the intervention group and of the control group), drug doses, frequency of drug use, course of treatment, and outcome measures for the intervention and control groups.

2.4. Risk of bias and evidence quality assessment
Two reviewers individually evaluated the included studies for bias with the Cochrane risk-of-bias tool 2.0. If there was any disagreement after the evaluation, a third reviewer was consulted, and the three reviewers worked together to determine the final bias assessment.
We assessed the quality of the included studies across the following domains 20: (a) bias arising from the randomization process, (b) bias due to deviations from intended interventions, (c) bias due to missing outcome data, (d) bias in measurement of the outcome, and (e) bias in selection of the reported result. Each domain was rated as (a) low risk, (b) some concerns, or (c) high risk. Following the guidance published on the Confidence in Network Meta-Analysis (CINeMA) website ( https://cinema.ispm.unibe.ch/ ), we evaluated the evidence from the included studies. CINeMA considers six aspects: within-study bias, reporting bias, indirectness, imprecision, heterogeneity, and incoherence 21.

2.5. Statistical analysis
Analyses were conducted with the gemtc package in R 3.6.1 for Bayesian NMA. We used the relative risk (RR) with 95% confidence intervals (CI) as the measure for binary outcomes. The number of pre-iterations (burn-in) and the number of iterations were set to 20,000 and 50,000, respectively. Based on trajectory plots, density plots, and Brooks-Gelman-Rubin diagnostic plots, we determined whether a satisfactory degree of convergence had been achieved. If the RCTs showed good homogeneity in study design, intervention details, control details, and outcomes, a random-effects model was used for the analysis. If there was heterogeneity between the study results (I² > 50% or P < 0.1), further subgroup analysis was conducted. In addition, data processing, the network evidence diagram, heterogeneity analysis, and forest plots were completed in R 3.6.1. We calculated surface under the cumulative ranking curve (SUCRA) values for each outcome measure and intervention and used an annular heat map to reflect the ranking of the different treatments. STATA 17.0 was used for the detection of publication bias: funnel plots were employed to assess potential publication bias, and the Peters test was used for additional validation. There were no closed loops between the intervention and control measures in the included studies.
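For readers unfamiliar with SUCRA, the value for each treatment is simply the average of its cumulative ranking probabilities, which the Bayesian posterior (e.g., from gemtc) provides. The sketch below is a minimal Python illustration of that calculation, assuming a rank-probability table has already been extracted; the regimen names and probabilities shown are invented, and the analysis in this study was actually performed in R 3.6.1 and STATA 17.0:

def sucra(rank_probs):
    # rank_probs[j] = probability that the treatment occupies rank j+1 (rank 1 = best).
    # SUCRA = sum of the cumulative ranking probabilities over ranks 1..a-1, divided by (a - 1).
    a = len(rank_probs)
    cumulative, total = 0.0, 0.0
    for p in rank_probs[:-1]:          # ranks 1 .. a-1
        cumulative += p
        total += cumulative
    return total / (a - 1)

# Invented rank probabilities for three regimens over ranks 1-3 (each row sums to 1)
rank_probabilities = {
    "HCSI+SOX": [0.70, 0.25, 0.05],
    "ADI+SOX":  [0.25, 0.55, 0.20],
    "SOX":      [0.05, 0.20, 0.75],
}
for regimen, probs in rank_probabilities.items():
    print(regimen, round(sucra(probs), 3))

A SUCRA of 1 would mean a treatment is certain to be the best in the network and 0 that it is certain to be the worst, which is how the percentage rankings reported in the Results should be read.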
3. Results

3.1. Study selection
Using the search strategy, we identified 1456 studies from the eight databases. Among them, 640 duplicates were removed using EndNote X9.3.3. After screening by title and abstract, another 633 studies were removed. The full texts of the 183 retained studies were screened, and 51 studies were identified for inclusion in our review. The flowchart is shown in Fig. 1.

3.2. Study characteristics
The 51 included studies reported on 3,703 patients with AGC who had a pathological diagnosis of gastric malignancy and were classified as TNM stage III-IV. Of these, 1,858 patients were in the intervention group and 1,845 in the control group. A total of 9 different CHIs were used in these 51 studies, including Aidi injections (ADI), Shenfu injections (SFI), Shenqifuzhen injections (SQFZI), Fufangkushen injections (FFKSI), Kangai injections (KAI), Kanglaite injections (KLTI), Huachansu injections (HCSI), Xiaoaiping injections (XAPI), and Huangqi injections (HQI). All CHIs are listed on the website of the China Drug Administration ( https://www.nmpa.gov.cn ). Furthermore, taxonomic validation of the species composition of all CHIs was conducted using the following three websites: http://mpns.kew.org/mpns-portal/ , http://www.plantsoftheworldonline.org , and https://www.catalogueoflife.org/ . In Supplement S3, we describe details of the composition, source, indications, and adverse reactions of these CHIs. Among the 51 included studies, a total of 41 assessed clinical effectiveness, while 18 assessed the improvement rate of the KPS score. In terms of adverse events, 26 studies assessed the incidence of leukopenia, 22 that of thrombocytopenia, 23 reported on nausea and vomiting, 19 investigated liver function damage, and 31 reported on peripheral neurotoxicity. Analysis of the 51 studies identified a prevalent SOX chemotherapy regimen: Oxaliplatin 130 mg/m² administered intravenously once daily (qd) on day 1 (d1) of each 21-day treatment cycle, together with orally administered Tegafur, the dose of which is adjusted according to the patient's body surface area (BSA). Specifically, each dose is 40 mg for patients with a BSA below 1.25 m², 50 mg for those with a BSA of 1.25 m² to 1.5 m², and 60 mg for those exceeding 1.5 m². Tegafur is taken twice daily (bid) from day 1 through day 14 (d1~d14) of each cycle. Each treatment cycle lasts 21 days, commencing with the administration of Oxaliplatin and a 14-day course of Tegafur, followed by a 7-day rest period before the next cycle begins. Table 1 shows the basic features of the included studies.

3.3. Risk of bias in included studies
In terms of bias arising from the randomization process, 27 studies precisely described the generation of the random allocation sequence: 24 employed the random number table method, 1 used random drawing, 1 used random envelopes, and 1 randomized according to odd and even numbers in the order of enrollment. The remaining 24 studies only mentioned randomization without further specifying the method. There were no baseline differences between groups in any of the 51 studies. None of the 51 studies explicitly stated the use of a blinding method. All studies reported all outcome data. In terms of bias in outcome measurement, 34 studies were judged appropriate, while 17 did not provide sufficient information.
In 31 studies, the data analysis was consistent with an analysis plan that had been pre-specified before unblinded outcome data were available; the remaining 20 studies were unspecific on this point. Overall, 19 studies had a low risk of bias, while for the remaining 32 studies there were some concerns about quality. The risk of bias for each study, and the overall summary of risk of bias, are displayed in Fig. 2.

3.4. Pairwise meta-analysis
We performed a pairwise meta-analysis of all interventions for the eight outcomes. Forest plots and heterogeneity analyses for the pairwise meta-analyses are presented in Supplement S4. In terms of clinical effectiveness, we found that, compared with the SOX regimen alone, the addition of CHIs had an RR = 1.37 (95% CI: 1.27 to 1.47, Z = 8.680, P < 0.001), while the improvement rate of the KPS score had an RR = 1.48 (95% CI: 1.32 to 1.65, Z = 6.923, P < 0.001). The incidence of leukopenia had an RR = 0.65 (95% CI: 0.58 to 0.73, Z = -7.211, P < 0.001), the incidence of thrombocytopenia an RR = 0.64 (95% CI: 0.54 to 0.77, Z = -4.913, P < 0.001), the incidence of nausea and vomiting an RR = 0.65 (95% CI: 0.55 to 0.77, Z = -5.214, P < 0.001), liver function damage an RR = 0.56 (95% CI: 0.44 to 0.71, Z = -4.789, P < 0.001), and the incidence of peripheral neurotoxicity an RR = 0.64 (95% CI: 0.56 to 0.74, Z = -6.474, P < 0.001). All pairwise comparisons of CHIs combined with the SOX regimen versus the SOX regimen alone were statistically significant. The heterogeneity analyses showed that most combinations of CHIs and the SOX regimen were homogeneous (I² < 50% and P > 0.1), except for the incidence of nausea and vomiting (I² = 59.7%, P < 0.001). In particular, there was appreciable heterogeneity in the comparison of ADI combined with SOX chemotherapy versus SOX chemotherapy alone (I² = 48.9%). A comparison of the seven studies involved revealed that four studies used antiemetic drugs during treatment, while three did not mention the use of antiemetic drugs. We therefore speculated that the use of antiemetic drugs during treatment may be the main source of heterogeneity. Subgroup analysis and meta-regression analysis were conducted according to whether antiemetic drugs were used, and the results are presented in Supplement S5. Heterogeneity within the antiemetic and non-antiemetic subgroups was substantially reduced after subgroup analysis, and the meta-regression analysis showed P < 0.05. We therefore believe that the use of antiemetic drugs may be the main cause of heterogeneity in the incidence of nausea and vomiting. We conducted leave-one-out sensitivity analyses for all studies and found that the results were robust and reliable (p < 0.05). The forest plots for the sensitivity analyses can be seen in Supplement S6.

3.5. Network meta-analysis
The network structure diagrams for all outcomes are shown in Fig. 3. We tested the degree of model convergence for all outcomes. As can be seen from the trajectory plots and density plots in Supplement S7, all chains overlap with each other, and the individual chains cannot be visually distinguished; all curves are close to a normal distribution, and the bandwidth values are stable. As can be seen from the Brooks-Gelman-Rubin diagnostic plots in Supplement S8, the median and 97.5% values of the reduction factor all tend to 1, and the potential scale reduction factor (PSRF) values are all 1.
Thus, all outcome models showed good convergence. The relative effect analyses of the outcomes are displayed in Fig. 4. We calculated SUCRA values for each outcome and intervention and used an annular heat map to reflect the ranking of the different treatments, as presented in Fig. 5.

3.5.1. Primary outcomes
3.5.1.1. Clinical effectiveness
There were 41 studies with clinical effectiveness as the primary outcome, including 10 with ADI, 1 with SFI, 10 with SQFZI, 4 with FFKSI, 5 with KAI, 4 with KLTI, 2 with HCSI, and 5 with XAPI. Compared with SOX only, ADI+SOX had an RR = 1.46 (95% CI: 1.28 to 1.69), SQFZI+SOX an RR = 1.29 (95% CI: 1.1 to 1.5), FFKSI+SOX an RR = 1.46 (95% CI: 1.15 to 1.89), KAI+SOX an RR = 1.54 (95% CI: 1.22 to 2.02), and HCSI+SOX an RR = 1.64 (95% CI: 1.17 to 2.37). These CHIs in combination with the SOX chemotherapy regimen improved clinical effectiveness in AGC patients, with statistically significant differences. From the numerical SUCRA results, the intervention order from highest to lowest improvement in clinical effectiveness was: HCSI+SOX (SUCRA: 78.17%) > SFI+SOX (SUCRA: 74.76%) > KAI+SOX (SUCRA: 64.16%) > FFKSI+SOX (SUCRA: 61.95%) > SQFZI+SOX (SUCRA: 38.02%) > KLTI+SOX (SUCRA: 34.18%) > XAPI+SOX (SUCRA: 24.32%) > SOX (SUCRA: 2.16%).

3.5.1.2. KPS score
A total of 18 studies reported KPS scores, including 2 with ADI, 1 with SFI, 4 with SQFZI, 1 with KAI, 4 with KLTI, 1 with HCSI, 4 with XAPI, and 1 with HQI. Compared to SOX exclusively, SFI+SOX had an RR = 1.96 (95% CI: 1.06 to 3.97), SQFZI+SOX an RR = 1.32 (95% CI: 1.03 to 1.78), KLTI+SOX an RR = 1.8 (95% CI: 1.28 to 2.57), XAPI+SOX an RR = 1.57 (95% CI: 1.19 to 2.14) and HQI+SOX an RR = 0.58 (95% CI: 0.31 to 0.97). The other CHIs did not show statistical significance. SFI+SOX may have the best effect on the KPS score improvement rate (SUCRA: 75.59%), while SOX exclusively may have the least effect (SUCRA: 3.56%).

3.5.2. Secondary outcomes
3.5.2.1. Incidence of leukopenia
A total of 26 studies reported the incidence of leukopenia, including 8 for ADI, 1 for SFI, 6 for SQFZI, 1 for FFKSI, 2 for KAI, 3 for KLTI, 1 for HCSI, 3 for XAPI, and 1 for HQI. The relative effects on the incidence of leukopenia were as follows: compared with SOX exclusively, ADI+SOX had an RR = 0.64 (95% CI: 0.5 to 0.81), SFI+SOX an RR = 0.56 (95% CI: 0.33 to 0.91), SQFZI+SOX an RR = 0.49 (95% CI: 0.36 to 0.64), SQFZI+KLTI an RR = 0.61 (95% CI: 0.4 to 0.9), FFKSI+HCSI an RR = 2.85 (95% CI: 1.14 to 8.13), KAI+SOX an RR = 0.53 (95% CI: 0.31 to 0.86), HCSI+SOX an RR = 0.29 (95% CI: 0.11 to 0.64), HCSI+XAPI an RR = 0.39 (95% CI: 0.15 to 0.93) and XAPI+SOX an RR = 0.74 (95% CI: 0.53 to 0.99). AGC patients who used HCSI+SOX had the highest probability of a reduced incidence of leukopenia (SUCRA: 93.35%), while patients who used SOX exclusively had the lowest (SUCRA: 6.30%).

3.5.2.2. Incidence of thrombocytopenia
A total of 22 studies reported the incidence of thrombocytopenia, including 6 with ADI, 1 with SFI, 4 with SQFZI, 3 with KAI, 2 with KLTI, 1 with HCSI, 4 with XAPI, and 1 with HQI. Compared with SOX exclusively, ADI+SOX had an RR = 0.6 (95% CI: 0.38 to 0.95), SQFZI+SOX an RR = 0.47 (95% CI: 0.27 to 0.73), SQFZI+KLTI an RR = 0.49 (95% CI: 0.22 to 0.98), KAI+SOX an RR = 0.52 (95% CI: 0.26 to 0.97), and HCSI+SOX an RR = 0.35 (95% CI: 0.11 to 0.96).
The SUCRA values indicated that AGC patients using HCSI+SOX had the greatest probability of a reduced incidence of thrombocytopenia (SUCRA: 80.19%), and the lowest probability was associated with SOX exclusively (SUCRA: 12.76%).

3.5.2.3. Incidence of nausea and vomiting
Overall, 23 studies reported the incidence of nausea and vomiting, including 7 with ADI, 1 with SFI, 4 with SQFZI, 2 with FFKSI, 3 with KAI, 1 with KLTI, 1 with HCSI, 3 with XAPI, and 1 with HQI. The relative effect analysis for the incidence of nausea and vomiting suggested that, compared with SOX chemotherapy alone, ADI+SOX had an RR = 0.7 (95% CI: 0.49 to 0.93), SFI+SOX an RR = 0.48 (95% CI: 0.22 to 0.97), SQFZI+SOX an RR = 0.42 (95% CI: 0.26 to 0.64), KAI+SOX an RR = 0.61 (95% CI: 0.35 to 0.99), and HCSI+SOX an RR = 0.22 (95% CI: 0.06 to 0.64). Compared with FFKSI+SOX, SQFZI+SOX had an RR = 0.45 (95% CI: 0.21 to 0.99). Compared with KLTI+SOX, SQFZI+SOX had an RR = 0.40 (95% CI: 0.18 to 0.86). Compared with HCSI+SOX, ADI+SOX had an RR = 3.16 (95% CI: 1.01 to 12.38), FFKSI+SOX an RR = 4.26 (95% CI: 1.18 to 18.16), and KLTI+SOX an RR = 4.81 (95% CI: 1.35 to 21.05). Compared with XAPI+SOX, HCSI+SOX had an RR = 0.31 (95% CI: 0.08 to 0.99). Based on the rank probabilities, the best-performing treatment for reducing the incidence of nausea and vomiting was HCSI+SOX (SUCRA: 95.15%), followed by SQFZI+SOX (SUCRA: 82.10%). KLTI+SOX had minimal effectiveness (SUCRA: 14.68%) but was still better than SOX exclusively (SUCRA: 13.32%).

3.5.2.4. Incidence of liver function damage
A total of 19 studies reported the incidence of liver function damage, including 5 with ADI, 2 with SQFZI, 3 with FFKSI, 3 with KAI, 2 with KLTI, 1 with HCSI, 2 with XAPI, and 1 with HQI. Compared with SOX alone, ADI+SOX had an RR = 0.42 (95% CI: 0.21 to 0.8). As the SUCRA results show, ADI+SOX might be the best choice for reducing the incidence of liver function damage (SUCRA: 75.16%), while SOX alone performed worst (SUCRA: 13.54%).

3.5.2.5. Incidence of peripheral neurotoxicity
A total of 31 studies reported the incidence of peripheral neurotoxicity, including 6 with ADI, 1 with SFI, 7 with SQFZI, 3 with FFKSI, 3 with KAI, 4 with KLTI, 2 with HCSI, 4 with XAPI, and 1 with HQI. Compared to SOX alone, ADI+SOX had an RR = 0.40 (95% CI: 0.26 to 0.62), SFI+SOX an RR = 0.30 (95% CI: 0.11 to 0.64), SQFZI+SOX an RR = 0.60 (95% CI: 0.42 to 0.81), KAI+SOX an RR = 0.44 (95% CI: 0.26 to 0.67), KLTI+SOX an RR = 0.74 (95% CI: 0.57 to 0.96), HCSI+SOX an RR = 0.36 (95% CI: 0.15 to 0.75), and HQI+SOX an RR = 0.56 (95% CI: 0.29 to 1). Compared with FFKSI+SOX, ADI+SOX had an RR = 0.46 (95% CI: 0.26 to 0.81) and SFI+SOX an RR = 0.34 (95% CI: 0.12 to 0.79). Compared with KAI+SOX, FFKSI+SOX had an RR = 2.02 (95% CI: 1.15 to 3.7). Compared with KLTI+SOX, ADI+SOX had an RR = 0.55 (95% CI: 0.32 to 0.90), SFI+SOX an RR = 0.40 (95% CI: 0.14 to 0.91), and KAI+SOX an RR = 0.59 (95% CI: 0.33 to 0.98). Compared with HCSI+SOX, FFKSI+SOX had an RR = 2.46 (95% CI: 1.11 to 6.22). Compared to XAPI+SOX, ADI+SOX had an RR = 0.48 (95% CI: 0.27 to 0.86), SFI+SOX an RR = 0.35 (95% CI: 0.12 to 0.84), KAI+SOX an RR = 0.52 (95% CI: 0.28 to 0.95), and HCSI+SOX an RR = 0.42 (95% CI: 0.17 to 0.97). From the numerical SUCRA results, SFI+SOX achieved the best reduction in the incidence of peripheral neurotoxicity (SUCRA: 88.26%), and SOX exclusively performed worst (SUCRA: 4.95%).
3.5.2.6. Survival data
We identified survival data in a total of 15 studies. However, due to the lack of standard deviation reporting in some of these studies, we were unable to conduct an NMA. Instead, we present the average values of median survival time (MST), median progression-free survival (mPFS), and time to progression (TTP) in Supplement S9. These descriptive results indicate that the addition of CHIs to the SOX chemotherapy regimen was associated with longer MST, mPFS, and TTP compared to the SOX chemotherapy regimen alone, suggesting that combining CHIs and chemotherapy may have a positive impact on patient survival outcomes.

3.6. Publication bias
The funnel plots for all outcomes are presented in Fig. 6. The points in all funnel plots were distributed symmetrically. In the Peters test, P = 0.7286 for clinical effectiveness, P = 0.0909 for the improvement rate of the KPS score, P = 0.5246 for the incidence of leukopenia, P = 0.5574 for the incidence of thrombocytopenia, P = 0.0508 for the incidence of nausea and vomiting, P = 0.1304 for the incidence of liver function damage, and P = 0.3807 for the incidence of peripheral neurotoxicity. The Peters test therefore indicated no publication bias for any outcome (P > 0.05).

3.7. Confidence in evidence
According to CINeMA, most confidence ratings were "low", and a few were "moderate". Since the network contains no closed loops, incoherence could not be evaluated; therefore, all "incoherence" levels were rated as "some concerns". The specific evaluation results are included in the Supplement CINeMA results.xls.
4. Discussion

The SOX regimen is frequently utilized as the primary chemotherapy regimen for AGC in clinical practice. Yamada et al. demonstrated that the SOX regimen achieves significantly higher rates of progression-free survival and overall survival compared to S-1 plus cisplatin 22. A multicenter randomized clinical trial conducted in 12 Chinese centers revealed that the SOX regimen is non-inferior to FOLFOX (Fluorouracil plus Oxaliplatin plus Leucovorin) as perioperative chemotherapy for patients with locally advanced GC; therefore, SOX should be considered a viable alternative treatment option for these patients in Asia. Additionally, that study identified a lower incidence of gastrointestinal toxicities (e.g., anorexia or nausea) in the SOX group than in the FOLFOX group 23. Bando et al. reported that SOX is an effective and feasible treatment for both nonelderly and elderly patients with AGC; in elderly patients, SOX exhibited favorable efficacy and safety compared to S-1 plus cisplatin 24. Nonetheless, the long-term use of chemotherapy drugs can induce side effects such as bone marrow suppression, gastrointestinal reactions, and peripheral neurotoxicity, reducing patients' tolerance and potentially resulting in treatment discontinuation 25, 26. Some studies have shown that TCM combined with chemotherapy regimens can significantly improve clinical effectiveness and quality of life while reducing adverse reactions 27, 28. CHIs, as a modality of TCM, offer the advantages of convenience and rapid absorption through intravenous administration 29. Consequently, the combined use of CHIs and the SOX regimen is becoming increasingly popular in the treatment of AGC. To comprehensively evaluate the clinical effectiveness and safety of various combinations of CHIs with the SOX regimen, this study conducted an NMA. We included 51 studies with 3,703 AGC patients involving 9 types of CHIs in the NMA. A random-effects model was used for data analysis, and there was low heterogeneity among the 51 included studies. According to the NMA results, compared to the other CHIs combined with SOX chemotherapy, HCSI plus SOX ranked highest for improving clinical effectiveness (SUCRA: 78.17%) and for reducing the incidence of leukopenia (SUCRA: 93.35%), thrombocytopenia (SUCRA: 80.19%), and nausea and vomiting (SUCRA: 95.15%). HCSI is a water-soluble preparation primarily extracted and refined from Bufo gargarizans [Bufonidae], containing various compounds such as bufadienolides, indole alkaloids, steroids, and bufotenine amides 30. Among these, bufadienolides are the main active anti-tumor ingredients 31. Bufadienolides can regulate the protein levels of cell cycle proteins and cyclin-dependent kinases (CDKs), leading to tumor cell arrest at the G2/M phase. Additionally, bufadienolides can increase the expression of the pro-apoptotic gene Bax and decrease the expression of the anti-apoptotic gene Bcl-2, leading to up-regulation of apoptosis-related caspase proteins and promotion of tumor cell apoptosis 32-34. In vitro experiments by Wang and colleagues 35 on AGC cells confirmed that bufadienolides can effectively inhibit the proliferation, invasion, and metastasis of AGC cells. By controlling disease progression and relieving symptoms and signs in advanced patients, HCSI may thereby improve clinical effectiveness. Furthermore, HCSI exerts a positive impact on cellular and humoral immune function in patients with advanced tumors.
It enhances the phagocytic activity of macrophages, improves the activity of natural killer cells and T cell subsets, and strengthens overall immune function. As a result, it reduces the incidence of leukopenia 36-39. Our NMA demonstrated that HCSI significantly reduced the incidence of thrombocytopenia (SUCRA: 80.19%). This could be attributed to HCSI's ability to improve coagulation function. Specifically, an RCT validated our findings by demonstrating that HCSI treatment significantly enhances coagulation function and improves survival quality in patients with AGC 40. Furthermore, our research found remarkable effectiveness of HCSI in reducing nausea and vomiting (SUCRA: 95.15%). However, the limited number of published studies on this topic indicates a potential area for future research. According to our data analysis, compared to other CHIs, SFI plus SOX was the best option for improving the KPS score (SUCRA: 75.59%) and reducing the incidence of peripheral neurotoxicity (SUCRA: 88.26%). SFI is a water-soluble preparation mainly composed of Panax ginseng C.A. Mey. [Araliaceae] and Aconitum carmichaeli Debeaux [Ranunculaceae] 41. Its chemical components mainly include ginsenosides and aconite alkaloids 42. Ginsenosides have effects including enhancing T cell proliferation, inhibiting apoptosis, and indirectly inhibiting the growth of tumor cells 43. Ginsenosides can improve the immunity of mice, reduce the expression of PD-L1 induced by chemoresistance, and restore the cytotoxicity of T cells toward cancer cells 44. Aconite alkaloids can achieve anti-tumor effects by reducing the spreading ability of cancer cells; this effect may be related to inhibition of the activation of the p38 MAPK signaling pathway 45. Animal experiments conducted by Liu et al. 46 have shown that aconite alkaloids can significantly inhibit the transformation function of mouse T lymphocytes, significantly inhibit the secretion of IL-1 and TNF-α from peritoneal macrophages, and also exert a significant inhibitory effect on the expression of CD91 and CD13 on macrophages, thereby regulating the immune function of the body. Clinical studies have found that Shenfu injections can improve patients' immunity and significantly improve their quality of life 47. In addition, Wei et al. 48 have confirmed that SFI can reduce the peripheral neurotoxicity of oxaliplatin. Therefore, SFI has the potential to improve patients' quality of life and reduce peripheral neurotoxicity. ADI is composed of Panax ginseng C.A. Mey. [Araliaceae], Eleutherococcus senticosus (Rupr. and Maxim.) Maxim. [Araliaceae], Astragalus membranaceus (Fisch.) Bunge [Fabaceae] and Harmonia axyridis (Pallas) [Coccinellidae] 49. The chemical components of ADI include ginsenosides, Eleutherococcus senticosus glycosides, Astragalus saponins, and Buthus martensii toxin. Ginsenosides and Eleutherococcus senticosus glycosides have good antioxidant effects 50. Buthus martensii extract has dual anti-tumor and immunoregulatory effects and can protect liver cells while exerting anti-tumor activity 51. Therefore, ADI plus SOX was the best-performing choice for reducing the incidence of liver function damage (SUCRA: 75.16%). In addition to RCTs demonstrating the clinical effectiveness and reduced adverse events of CHIs in combination with SOX for AGC, real-world studies have also been conducted.
The analysis and processing of real-world clinical data are pivotal for transforming the individualized empirical laws of TCM into sophisticated medical evidence 52 . For instance, an observational study by Ai et al., which involved 71 patients of AGC revealed that the combination of SOX chemotherapy regimen and ADI reduced patients' vascular endothelial growth factor levels, ameliorated cancer-related fatigue, and boosted immune function 53 . Similarly, Gao Y administered SQFZI in combination with SOX to 40 patients with AGC. The study indicated that this treatment significantly improved the disease control rate, was safe, and considerably enhanced the quality of survival 54 . Nevertheless, it is important to recognize that extant real-world studies on the SOX combined with CHIs primarily rely on small samples. This approach may engender certain issues. Firstly, the results derived from small-sample studies may lack stability and be subject to randomness. Secondly, such studies may not adequately represent the treatment response and disease progression in large patient groups, thus limiting the generalizability and representativeness of their findings. Lastly, due to the restricted sample size, certain potential therapeutic impacts or adverse reactions may remain undetected, possibly influencing the refinement and enhancement of treatment strategies 55 , 56 . We anticipate more large-sample real-world studies in the future to augment the reliability and representativeness of the findings. We also aspire for the ongoing improvement and innovation of research methodologies, such as the incorporation of more advanced data analysis and processing tools, to better explore and utilize real-world clinical data. This would further bolster the capability of TCM in treating AGC and pave the way for personalized and precise medical services. Our NMA has particular strengths. We employed the Bayesian model, which is the most applicable approach for conducting multiple-intervention NMA, to evaluate the clinical effectiveness of CHIs in combination with SOX for the treatment of AGC. This application of the Bayesian model addressed the lack of direct comparisons between CHIs and revealed a favorable intervention through ranking analysis of various outcomes. Moreover, a thorough search and a pre-defined inclusion criterion were implemented to minimize clinical heterogeneity to the greatest extent possible 57 . In terms of clinical research, we closely followed current clinical treatment trends. Based on published RCTs, our NMA included 9 commonly used CHIs for the treatment of AGC. We conducted subgroup analysis, meta-regression analysis, and sensitivity analysis to discuss the sources of heterogeneity. We used funnel plots to detect publication bias and further validated results with the Peters test. We conducted SUCRA rankings for each treatment measure and performed statistical analyses to determine the statistical significance of the comparisons between different treatment measures. The implementation of NMA followed the PRISMA-NMA guidelines. Furthermore, we conducted a comprehensive evaluation of evidence for the comparisons between multiple treatment measures for each outcome. However, our study has certain limitations. In terms of the methodological evaluation of the included 51 studies, we found that none of the included studies mentioned whether participants and outcome assessors were blinded, which may result in bias and lack of objectivity. 
In addition, 10 studies did not report clinical effectiveness and selective reporting cannot be ruled out. The small sample size of the different CHIs reduces the stability and accuracy of the results. Furthermore, none of the 51 studies included in our analysis had registered clinical trial protocols. However, registering clinical trial protocols is crucial not only for fulfilling ethical obligations towards participating subjects and researchers but also for providing reference information to patients and physicians. Additionally, it plays a pivotal role in mitigating publication bias in medical literature research. Moreover, registering clinical trial protocols aids medical editors in understanding trial results, promotes effective investment and allocation of research funds, and assists ethics practitioners in assessing the appropriateness of study 58 . Moving forward, we anticipate that clinical trials investigating the combination of chinese herbal injections with the SOX chemotherapy regimen for the treatment of AGC will prioritize the registration of trial protocols before implementation. This step will ensure transparency throughout the design, execution, and completion of the trials, thereby guaranteeing the traceability of the studies. Since the network has no closed-loop evidence, inconsistency could not be assessed. The CINeMA shows that most confidence rating results were “low”. At the same time, the results cannot be extrapolated as the included studies are all from Chinese studies and the participants are all of Chinese heritage. Therefore, we propose in the future to generate larger and more methodologically rigorous RCTs for CHIs in combination with the SOX chemotherapy regimen, including in different countries. Furthermore, it is crucial to conduct more pharmacological studies to further verify the safety of CHIs in the treatment of patients with AGC. While none of the included studies discussed health economic aspects, we also encourage such studies to understand the price and treatment effects of CHIs, and tailor appropriate treatment plans for patients according to their conditions and taking into account economic considerations 59 , thereby supporting the selection of the optimal solution for AGC patients according to the clinical effectiveness, safety and economy of CHIs.
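Because the SUCRA metric underpins the ranking statements made throughout this discussion, the short sketch below shows how SUCRA values are obtained from the rank-probability matrix produced by a Bayesian NMA. The matrix is invented purely for illustration and does not correspond to any treatment in this review.

import numpy as np

# Rows = treatments, columns = probability of being ranked 1st, 2nd, ..., a-th
# (hypothetical values; each row sums to 1).
rank_prob = np.array([
    [0.55, 0.30, 0.10, 0.05],   # hypothetical "CHI A + SOX"
    [0.25, 0.40, 0.25, 0.10],   # hypothetical "CHI B + SOX"
    [0.15, 0.20, 0.40, 0.25],   # hypothetical "CHI C + SOX"
    [0.05, 0.10, 0.25, 0.60],   # SOX alone
])
a = rank_prob.shape[1]                       # number of treatments
cum = np.cumsum(rank_prob, axis=1)[:, :-1]   # cumulative probabilities up to rank a-1
sucra = cum.sum(axis=1) / (a - 1)            # SUCRA = mean cumulative ranking probability
print(np.round(sucra * 100, 2))              # percentages, as reported in the text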
Conclusion In conclusion, CHIs in combination with SOX have demonstrated a positive effect on the treatment of AGC patients compared to the use of SOX alone. HCSI and SFI injections potentially have the most pronounced integrated advantage of all CHIs. ADI can be considered the optimal choice for reducing the incidence of liver function damage. More methodologically rigorous RCTs with larger sample sizes and additional pharmacological studies are needed to support this evidence. Health economic studies of CHIs should also be conducted to select the optimal solution for AGC patients based on clinical effectiveness, safety, and economy.
Competing Interests: The authors have declared that no competing interest exists. Background: Randomized controlled trials (RCTs) have demonstrated that combining Chinese herbal injections (CHIs) with oxaliplatin plus tegafur (SOX) chemotherapy regimens improves clinical effectiveness and reduces adverse reactions in patients with advanced gastric cancer (AGC). These RCTs highlight the potential applications of CHIs and their impact on AGC patient prognosis. However, there is insufficient comparative evidence on the clinical effectiveness and safety of different CHIs when combined with SOX. Therefore, we performed a network meta-analysis to rank the clinical effectiveness and safety of different CHIs when combined with SOX chemotherapy regimens. This study aimed to provide evidence for selecting appropriate CHIs in the treatment of patients with AGC. Methods: We searched eight databases from their inception until March 2023. Surface Under the Cumulative Ranking Curve (SUCRA) probability values were used to rank the treatment measures, and the Confidence in Network Meta-Analysis (CINeMA) software assessed the grading of evidence. Results: A total of 51 RCTs involving 3,703 AGC patients were identified. Huachansu injections + SOX demonstrated the highest clinical effectiveness (SUCRA: 78.17%), significantly reducing the incidence of leukopenia (93.35%), thrombocytopenia (80.19%), and nausea and vomiting (95.15%). Shenfu injections + SOX improved Karnofsky's Performance Status (75.59%) and showed a significant reduction in peripheral neurotoxicity incidence (88.26%). Aidi injections + SOX were most effective in reducing the incidence of liver function damage (75.16%). According to CINeMA, most confidence rating results were classified as “low”. Conclusion: The combination of CHIs and SOX shows promising effects in the treatment of AGC compared to SOX alone. Huachansu and Shenfu injections offer the greatest overall advantage among the CHIs, while Aidi injections are optimal for reducing the incidence of liver damage. However, further rigorous RCTs with larger sample sizes and additional pharmacological studies are necessary to reinforce these findings.
Supplementary Material
Funding This study was supported by the National Natural Science Foundation of China under the Key Project Grant No. 81830115, titled "Key Techniques and Outcome Research for the Therapeutic Effect of Traditional Chinese Medicine as a Complex Intervention Based on a Holistic System and Pattern Differentiation & Prescription". Ethical statement This study is a systematic review of published RCTs and therefore does not require ethical approval. Author contributions Zhijun Bu : Conceptualization, Methodology, Software, Validation, Formal analysis Investigation, Resources, Data curation, Writing - Original Draft, Visualization, Project administration. Shurun Wan : Methodology, Software, Validation, Formal analysis Investigation, Resources, Data curation. Peter Steinmann : Writing - Original Draft, Writing - Review & Editing. Zetao Yin : Methodology, Software, Formal analysis Investigation, Resources. Jinping Tan : Methodology, Software, Formal analysis Investigation, Resources. Wenxin Li : Methodology, Software, Formal analysis Investigation, Resources. Zhenyan Tang : Methodology, Software, Validation, Formal analysis Investigation, Resources. Shuo Jiang : Methodology, Software, Formal analysis Investigation, Resources. Mengmeng Ye : Methodology, Software, Formal analysis Investigation, Resources. Jinyang Xu : Methodology, Software, Formal analysis Investigation, Resources. Youyou Zheng : Validation, Data curation. Xuehui Wang : Validation, Data curation. Jianping Liu : Supervision, Project administration, Funding acquisition. Zhaolan Liu : Supervision, Project administration, Funding acquisition. Data availability The data analyzed in this study have been included in the supplementary materials. If you need further access to the data or have any related inquiries, please feel free to contact the corresponding author. Study registration This study was registered on PROSPERO (CRD42022383478).
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):889-907
oa_package/97/dc/PMC10788720.tar.gz
PMC10788721
0
Introduction Liver hepatocellular carcinoma (LIHC) is the most common type of primary liver cancer, accounting for 90% of liver cancers 1 , 2 . Chronic infection due to hepatitis B and C viruses is a common risk factor for LIHC, which has become the cancer with the highest recurrence rate worldwide 1 - 3 . Additionally, obesity, diabetes, alcohol consumption, and other risk factors for liver injury can further promote the development of LIHC 3 , 4 . The etiology of LIHC is closely related to environmental factors and requires adaptation to changing environmental conditions, in which epigenetic aberrations play a critical role in the development and progression of LIHC 4 . DNA methylation and acetylation, alterations in microRNAs and long noncoding RNAs (lncRNAs), and chromatin modifications are the most common epigenetic modifications that also lead to changes in the liver epigenome 4 , 5 . The accumulation of these epigenetic alterations leads to carcinogenesis, progression, and metastasis. LncRNAs are defined as noncoding RNAs greater than 200 nucleotides in length 6 . LncRNAs mainly include enhancer RNAs, sense or antisense transcripts, and intergenic transcripts 6 , 7 . LncRNAs are thought to have multiple functions, including the organization of nuclear structural domains, transcriptional regulation, and regulation of protein or RNA molecules 7 . However, the biological processes of the vast majority of lncRNAs remain unknown. Receptor tyrosine kinases (RTKs) are a family of signaling proteins in which growth factor RTK-mediated cell signaling pathways are essential in maintaining normal physiological functions 8 . However, their aberrant activation promotes tumor development 9 . Currently, epidermal growth factor receptor (EGFR) is one of the most studied RTK signaling proteins and is closely associated with the development of multiple human tumors 10 , 11 . The epidermal growth factor receptor pathway substrate 15 (EPS15) was originally identified as a substrate for the EGFR signaling pathway 12 . Notably, in acute myelogenous leukemias, the EPS15 gene was found to rearrange at t (1;11) (p32, q23), suggesting a role for EPS15 in tumorigenesis and development 13 . In addition, Eps15 was also found to be involved in endocytosis and cell growth regulation 14 . Therefore, EPS15 may affect the signaling efficiency of EGFR and be involved in the development of some tumors. LncRNAs can be categorized into five classes, based on their relative position to nearby coding genes: antisense lncRNAs, intronic lncRNAs, intergenic lncRNAs, bidirectional lncRNAs, and promoter-associated lncRNAs, which regulate genes expression in very different ways 7 , 15 . Antisense lncRNAs are transcribed from the antisense strand of a gene (usually a protein-coding gene) and overlap with the mRNA of the gene 15 . The presence and positional specificity of this naturally occurring antisense lncRNA suggest that it tends to act more closely with the sense strand than with target genes in general 16 . According to the current study, the mechanisms by which AS-lncRNAs affect gene expression on the sense strand can be divided into three categories 16 : 1) The transcription process of AS-lncRNAs represses sense-strand gene expression. 2) AS-lncRNAs bind to DNA or histone-modifying enzymes and regulate the epigenetics of sense-strand genes, thereby affecting gene expression. 
3) AS-lncRNAs bind to sense-strand mRNA through base complementary pairing and affect variable splicing of mRNA, thereby affecting protein translation and function. LncRNA EPS15-antisense1 (EPS15-AS1) is an antisense lncRNA of EPS15, which has been reported to inhibit EPS15 expression and induce apoptosis 17 . However, the role of EPS15-AS1 in LIHC and the mechanism are still unclear. Ferroptosis is a novel type of programmed cell death triggered by iron-dependent lipid peroxidation, ultimately leading to cell membrane damage 18 , 19 . Uncontrolled lipid peroxidation is a significant feature of ferroptosis, resulting from the interaction between the ferroptosis-inducing and defense systems 19 . Ferroptosis is activated when the promoters of ferroptosis significantly exceed the antioxidant capacity of the defense system 19 . Some oncogenes and oncogenic signaling can activate the antioxidant or ferroptosis defense system, favoring tumorigenesis, progression, metastasis, and resistance 20 , 21 . Therefore, this study aimed to analyze the expression of EPS15-AS1 and EPS15 in LIHC and to investigate whether EPS15-AS1 has the ability to regulate EPS15 and the sensitivity of LIHC to ferroptosis.
Materials and Methods Cell Culture A total of three cell lines, Huh7, HepG2, and HL7702, were used in the current study. HL7702 is a normal human hepatocyte cell line, and Huh7 and HepG2 are human LIHC cell lines. All cell lines were purchased from the Shanghai Institute of Biochemistry and Cell Biology (SIBCB) and cultured in Dulbecco's modified Eagle's medium (DMEM) (HyClone, USA) containing 10% fetal bovine serum (FBS) (Gibco, USA). All cells were cultured at 37 °C and 5% CO2 in a humid incubator (Thermo Fisher, USA). When the cultured cells were fused to 80-90%, cells were digested with 0.25% trypsin (NCM Biotech, China) and passaged in 1 to 3 passages. Western Blot Analysis After incubation under different intervention conditions, all cells were collected and lysed using RIPA lysis buffer (NCM Biotech, China). After 3 minutes of lysis, the lysates were centrifuged at 12,000 rpm for 10 minutes, and the supernatant was collected for western blot analysis. The protein concentrations were quantified using the BCA kit (NCM Biotech, China) to keep the total amount of protein consistent across the different experimental groups. Finally, 20 μg of protein per group was used for western blot analysis. 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (10% SDS-PAGE) (Vazyme, China) was applied to separate the protein, and then the protein was transferred to nitrocellulose membranes (Millipore, USA) at 300 mA for 1 hour. The nitrocellulose membranes containing protein were blocked with 5% nonfat powdered milk (Beyotime, China). Then, the membranes were incubated with the corresponding primary antibody at 4 °C for 12 h. The primary antibodies against EPS15 (dilution ratio, 1:1000), β-Actin (dilution ratio, 1:20000) and AKR1B1 (dilution ratio 1:1000) were purchased from ABclonal (#A9814, #AC038, #A18031). Next, the nitrocellulose membranes were washed with TBS-Tween and incubated with secondary antibody (HRP-conjugated goat anti-rabbit IgG, ABclonal, #AS014, 1:10,000). Finally, the chemiluminescent HRP substrate (NCM Biotech, China) was applied for imaging, and the image was detected by a chemiluminescence detection system (Bio-Rad, USA). Real-Time Quantitative PCR Analysis (RT‒qPCR) In the current study, total RNA was collected with a total RNA isolation kit (Vazyme, # RC101-01), and 500 ng of RNA was reverse transcribed into cDNA using RT SuperMix (Vazyme, #R233-01). SYBR qPCR Master Mix (Vazyme, #Q321-02) was applied to perform RT‒qPCR, and GAPDH was chosen as an internal control. Then, the 2^-(∆∆Ct) method was used to calculate the relative expression levels of EPS15-AS1 and EPS15. The reaction conditions were set as follows: 95.0 °C for 3 minutes and then 40 cycles of 95.0 °C for 5 seconds, 60.0 °C for 30 seconds and 72.0 °C for 30 seconds. All primer sequences were as follows: GAPDH F: 5'-CATCACTGCCACCCAGAAGACTG-3' R: 5'-ATGCCAGTGAGCTTCCCGTTCAG-3'; EPS15 F: 5'-ACCTTCACTTAGGCCCCTGT-3' R: 5'-CCCTTACCCTCACTCAACCA-3'; EPS15-AS1 F: 5'-ACCCCAAAGCCTCTTGATTT-3' R: 5'-CGTCTCCTCAGACGGTTCTC-3'. Invasion Assay The trans-well chamber used in the current study was purchased from NEST Biotech (China, #725201). HepG2 cells were digested and resuspended at a concentration of 200,000/ml, and then, 100 μL of the cell suspension was seeded in each upper chamber of the trans-well. In addition, 500 μL of DMEM containing 10% FBS was added to the lower chamber of the trans-well. 
Finally, the cells were cultured for 24 hours, and the trans-well chamber with cells was collected and stained with 0.1% crystal violet solution. Wound Healing Assay HepG2 cells (1 × 10 5 per well) were seeded in 6-well plates for the migration assay. Until the cells were fused to 90-95%, the monolayer of cells was scratched with a 200 μL plastic tip. Then, the cells were rinsed three times with PBS and cultured with DMEM containing 5% FBS for 12 hours. Images were taken at 0 and 12 hours for analysis of migration distance: migration distance = (initial wound width - wound width at each time point)/2 (μm). Flow Cytometry Mitochondrial membrane potential staining was performed using the JC-1 staining kit, which was purchased from Beyotime Biotech, China (#C2006). Lipid peroxidation was detected using a lipid peroxidation probe-BDP 581/591 C11 kit (Dojindo, Japan, #L267). In addition, intracellular Fe 2+ ions were detected with an iron ion detection probe-FerroOrange kit (Dojindo, #F374), and an Annexin V-FITC/PI Kit (Dojindo, #AD10) was used to detect the percentage of cells with damaged cell membranes. All staining was performed according to the corresponding manufacturer's instructions. Transfection and Construction of Overexpression Cell Lines The expression vector used in the current study was pcDNA3.1, and Lipofectamine 2000 (Thermo Fisher, USA) was used to transfect pcDNA3.1. In addition, the three overexpression plasmids, including overexpression EPS15 (OE_EPS15), overexpression EPS15-AS1 (OE_EPS15-AS1), and overexpression AKR1B1 (OE_AKR1B1), were all purchased from Sangon Biotech (Shanghai, China). The overexpression plasmids and Lipofectamine 2000 were mixed separately with 50 μl of DMEM and left to stand for 5 minutes. Then, the plasmid and Lipofectamine 2000 were mixed and incubated for 20 minutes at room temperature, and the transfection complex was immediately added to the HepG2 culture plate. Then, the HepG2 cells and plasmids were cultured together for 24 hours, switched to normal DMEM containing 10% FBS, and cultured for another 24 hours. After obtaining overexpression cell lines, gene expression levels were examined using RT‒qPCR and western blot analysis. Online Databases and Bioinformatics Analysis The GEPIA2 online analysis tool ( http://gepia2.cancer-pku.cn/#index ) is a tool for analyzing The Cancer Genome Atlas (TCGA) database and was used to perform survival analysis and to compare the expression of EPS15 and AKR1B1 in LIHC tissues and adjacent normal tissues. To find the correlation between EPS15 and ferroptosis-associated proteins, an interaction network between EPS15 and ferroptosis-associated proteins was constructed by using STRING ( https://cn.string-db.org/ ), which is a protein‒protein interaction network functional enrichment analysis website. In addition, ferroptosis-associated protein was obtained from FerrDb ( http://www.zhounan.org/ferrdb/current/ ), a database that summarizes the latest ferroptosis-associated markers and genes. Cytoscape 3.10.0 was used to show the interaction network diagram. Statistical Analysis GraphPad Prism (version 9.0) was applied to conduct statistical analysis. The mean ± standard deviation was calculated to describe continuous variables. A t test was used to compare the two groups, and one-way ANOVA followed by Dunnett's multiple comparisons test was used for statistical analysis among multiple groups. P < 0.05 was considered to indicate a significant difference.
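As a concrete illustration of the 2^-(ΔΔCt) relative-expression calculation described in the RT-qPCR section above, the minimal sketch below works through the arithmetic for one target gene against the GAPDH internal control. The Ct values are hypothetical placeholders, not measurements from this study.

import numpy as np

# Mean Ct values for the target gene and the GAPDH internal control
# (hypothetical placeholders only).
ct_target_control = 28.4   # e.g., EPS15-AS1 in the reference cell line
ct_gapdh_control  = 18.1
ct_target_treated = 31.2   # e.g., EPS15-AS1 in the comparison cell line
ct_gapdh_treated  = 18.3

delta_ct_control = ct_target_control - ct_gapdh_control   # ΔCt, reference group
delta_ct_treated = ct_target_treated - ct_gapdh_treated   # ΔCt, comparison group
delta_delta_ct   = delta_ct_treated - delta_ct_control    # ΔΔCt

fold_change = 2.0 ** (-delta_delta_ct)
print(f"Relative expression (2^-ΔΔCt) = {fold_change:.3f}")  # < 1 indicates lower expression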
Results EPS15-AS1 expression was decreased in LIHC cells The EGFR signaling pathway is one of mammalian cell physiology's most important signaling pathways 10 . It promotes tumorigenesis mainly by affecting tumor cell proliferation, angiogenesis, tumor invasion, and metastasis. Aberrant activation of EGFR signaling pathways is one of the mechanisms of tumor development. It has been reported that the EPS15 gene encodes a protein part of the EGFR signaling pathway 12 . In this study, according to the data of LIHC tissues in the TCGA database, we observed that the expression of EPS15 in LIHC tissue was higher than that in normal liver tissue (p = 0.055) ( Figure 1 A ). Additionally, the patients with high expression of EPS15 had a lower survival rate than those with low EPS15 expression (log rank p = 0.059) ( Figure 1 B ). Comparison between the normal hepatocyte cell line HL7702 and the LIHC cell lines HepG2 and Huh7 further revealed that the gene transcription and protein expression levels of EPS15 were higher in LIHC cells than in normal hepatocyte cells ( Figure 1 C and 1D ). Interestingly, we also found that the transcription level of the lncRNA EPS15-AS1 was significantly decreased in HepG2 and Huh7 cells compared with that in HL7702 cells ( Figure 1 E ). Therefore, these results suggest that the expression level of EPS15 is closely related to the development of LIHC and that EPS15-AS1 may be involved in the regulation of EPS15 expression during the development of LIHC. EPS15-AS1 inhibited LIHC cell activity by decreasing EPS15 expression Antisense lncRNAs are transcribed from the antisense strand of a protein-coding gene and overlap with the mRNA of the gene, and this structure of antisense lncRNAs provides the basis for the regulation of gene expression 16 . Thus, we hypothesize that EPS15-AS1 can modulate EPS15 expression in LIHC cells, which in turn affects the invasiveness of LIHC cells. To verify the effects of EPS15-AS1 in HepG2, overexpression of EPS15-AS1 was performed using the pcDNA3.1 plasmid. RT‒qPCR and western blotting analysis showed that EPS15 transcripts were significantly reduced in the EPS15-AS1 overexpression group (OE_EPS15-AS1), and the level of EPS15 proteins was also decreased ( Figure 2 A and 2B ). In addition, invasion assays showed that the number of cells passing through the trans-well chambers was reduced in the OE_EPS15-AS1 group ( Figure 2 C and 2D ), and wound healing assays also showed a significant decrease in the migratory ability of the OE_EPS15-AS1 group compared with the control group (vector group) ( Figure 2 E and 2F ). These results suggest that EPS15-AS1 can inhibit LIHC cell activity by affecting the expression of EPS15. Next, to verify whether EPS15-AS1 inhibits LIHC cell activity mainly by affecting EPS15 expression, we overexpressed EPS15 and EPS15-AS1 together. In the following experiment, the intervention was divided into four groups: (1) Vector (control group), (2) OE_EPS15-AS1 (EPS15-AS1 overexpression group), (3) OE_EPS15 (EPS15 overexpression group) and (4) OE_EPS15 + OE_EPS15-AS1 (EPS15 and EPS15-AS1 overexpression group). RT‒qPCR results showed that EPS15 expression increased in the OE_EPS15 group and decreased in the OE_EPS15-AS1 group ( Figure 3 A ). In the OE_EPS15 + OE_EPS15-AS1 group, we found that the expression of EPS15 was not significantly different from that in the vector group ( Figure 3 A ). 
Moreover, western blot analysis further validated the RT‒qPCR results that EPS15 was not significantly different between the vector group and the OE_EPS15 + OE_EPS15-AS1 group ( Figure 3 B ). Finally, both the invasion and wound healing assays confirmed that elevated EPS15 promoted the invasiveness of hepatocellular carcinoma, but overexpression of EPS15-AS1 inhibited HepG2 activity by suppressing EPS15 expression ( Figure 3 C-F ). Therefore, all these results suggest that EPS15 has the ability to promote LIHC cell invasiveness, whereas overexpression of EPS15-AS1 can inhibit LIHC cell activity and invasiveness by downregulating EPS15 expression. EPS15-AS1 increases the susceptibility of LIHC cells to ferroptosis During the previous experiments, we observed that the cellular status became significantly worse with overexpression of EPS15-AS1 or inhibition of EPS15 expression. Moreover, we also found significant changes in intracellular Fe 2+ ion levels ( Figure 4 A ), leading us to suspect that EPS15 may influence the relationship between LIHC and ferroptosis. As shown in Figure 4 A , intracellular Fe 2+ increased in the OE_EPS15-AS1 group and decreased in the OE_EPS15 group compared with the Vector group. Ferroptosis is an iron-dependent programmed cell death characterized by mitochondrial dysfunction and uncontrolled lipid peroxidation. JC-1 is a mitochondrial membrane potential staining reagent, and BDP is a lipid peroxidation probe. As shown in Figures 4 B and 4C , overexpression of EPS15-AS1 significantly promoted lipid peroxidation and mitochondrial dysfunction in HepG2 cells, whereas overexpression of EPS15 attenuated the effects of EPS15-AS1. Finally, to observe whether the changes in mitochondria and lipids would eventually lead to cell death, propidium iodide (PI) staining was performed. PI is an agent that can bind to DNA and usually cannot pass through normal living cell membranes but can pass through damaged cell membranes or dead cells. As shown in Figure 4 D , overexpression of EPS15-AS1 led to ferroptosis in LIHC cells, whereas expression of EPS15 alleviated the ferroptosis induced by overexpression of EPS15-AS1. Moreover, when OE_EPS15-AS1 cells were treated with ferroptosis inhibitors Ferrostain-1 and Deferasirox, the percentage of dead cells was decreased in the Ferrostain-1 and Deferasirox groups ( Figure S1 ). These results indicated that EPS15-AS1 increases the susceptibility of LIHC cells to ferroptosis by inhibiting the transcription of EPS15. EPS15 enhances LIHC cell activity by promoting the expression of AKR1B1 To investigate the mechanism between EPS15 and ferroptosis, an interaction network between EPS15 and ferroptosis-associated proteins was constructed ( Figure 5 A ), and according to the interaction network, EGFR, ARF6, GJA1, NEDD4, TFRC, UBC, and TFAP2A were significantly correlated with EPS15 (marked by red circles in Figure 5 A ). Interestingly, EGFR is also a ferroptosis-related protein and is associated with a large number of other ferroptosis-associated proteins in the network, as shown in Figure 5 A , where interacting straight lines cluster around EGFR. We then constructed a subnetwork consisting of EGFR-associated proteins from the network of Figure 5 A ( Figure 5 B ). The aldo-keto reductase family 1 member B1 (AKR1B1) gene encodes a member of the aldo/keto reductase superfamily, and this gene catalyzes the reduction of a number of aldehydes 22 . Recently, AKR1B1 was reported to promote drug resistance to EGFR TKIs in lung cancer cell lines 23 . 
The current study also found that AKR1B1 correlates with EPS15 and EGFR (Figure 5B). In addition, we observed that in the TCGA database, the expression of AKR1B1 in LIHC was higher than that in normal tissue (p < 0.05) (Figure 5C). Patients with high AKR1B1 expression had a lower survival rate than patients with low AKR1B1 expression (log-rank p < 0.05) (Figure 5D). Western blot analysis further showed that overexpression of EPS15 in LIHC cells increased the expression of AKR1B1, whereas the expression of AKR1B1 was reduced in the OE_EPS15-AS1 group (Figure 5E). Therefore, we conclude that EPS15 can promote AKR1B1 expression in LIHC. To further clarify whether EPS15 promotes LIHC development through AKR1B1, we constructed OE_EPS15-AS1 HepG2 cell lines and OE_EPS15-AS1 + OE_AKR1B1 HepG2 cell lines (Figure 6A). Although EPS15-AS1 inhibited cell migration in wound healing assays, overexpression of AKR1B1 reversed the inhibitory effect of EPS15-AS1 (Figure 6B). In the Fe 2+ detection assay, overexpression of AKR1B1 significantly reduced the elevated Fe 2+ caused by overexpression of EPS15-AS1 (Figure 6C). Detection of lipid peroxidation and mitochondrial membrane potential also confirmed that EPS15-AS1 enhanced lipid peroxidation and disrupted mitochondrial membrane potential, and overexpression of AKR1B1 significantly inhibited this damage (Figure 6D and 6E). PI staining further demonstrated that AKR1B1 reduced the ratio of dead cells in the OE_EPS15-AS1 + OE_AKR1B1 group compared with the OE_EPS15-AS1 group (Figure 6F). In addition, Zhang et al. reported that AKR1B1 promotes glutathione (GSH) de novo synthesis to protect against oxidative damage, and glutathione peroxidase 4 (GPX4) is able to utilize GSH to reduce peroxidized lipids to non-toxic lipids, thereby protecting cells from ferroptosis 22. Therefore, intracellular GSH was also measured. The results showed that intracellular GSH decreased after EPS15-AS1 overexpression, increased in the OE_AKR1B1 group, and the inhibitory effect of EPS15-AS1 on GSH was attenuated in the OE_EPS15-AS1 + OE_AKR1B1 group (Figure S2). These results suggested that AKR1B1 can promote LIHC progression and that EPS15-AS1 increases the susceptibility of LIHC cells to ferroptosis by inhibiting the transcription of EPS15 and AKR1B1.
Discussion LIHC is one of the most common malignant tumors, but metastasis and postoperative recurrence seriously affect the long-term prognosis 3 . In addition, resistance to chemotherapeutic agents is an important reason for the low efficacy of radiotherapy and chemotherapy in hepatocellular carcinoma patients 24 . Therefore, an increasing number of researchers believe that combination gene therapy may be a potential direction for the treatment of LIHC 25 . Approximately 90% of genes in eukaryotic genomes are transcribed, with only 1-2% of transcribed genes coding for proteins, while most other genes are transcribed as noncoding RNAs 26 , 27 . Noncoding RNAs play an important role at the transcriptional and posttranscriptional levels of encoded genes 27 . In the current study, we found that EPS15 was closely associated with the progression of LIHC by analyzing the TCGA database. With further analysis, we found that the expression level of EPS15-AS1 was reduced in LIHC cells. Liu et al. also found that EPS15-AS1 was expressed at low levels in liver cancer cells, and overexpression of EPS15-AS1 reduced EPS15 expression and promoted apoptosis of liver cancer cells 17 . The current study showed that EPS15 was increased in LIHC cell lines, including HepG2 and Huh7, compared with the normal hepatocyte cell line HL7702. We also demonstrated that overexpression of EPS15-AS1 inhibited EPS15 expression and weakened the invasiveness of hepatocellular carcinoma cell lines. However, we found that overexpression of EPS15-AS1 induced ferroptosis but not apoptosis in LIHC cells. This difference in conclusions may be due to the different experimental methods used to detect cell death between the two studies. Annexin V-FITC/PI was initially invented to detect the process of apoptosis 28 . The mechanism of this assay is as follows: in living cells, phosphatidylserine (PS) is located on the inner side of the cell membrane, but in early apoptotic cells, the PS flips from the inner side of the cell membrane to the surface of the cell membrane. Annexin-V, a Ca 2+ -dependent PS-binding protein, can bind to the cell membrane during the early stage of apoptosis by binding to the PS exposed outside of cells. In the late stage of apoptosis, the cell membrane is severely damaged, and Annexin-V can freely pass through the cell membrane 28 . In addition, propidium iodide (PI) was used to distinguish surviving cells from necrotic or late-stage apoptotic cells. PI is a nucleic acid dye that does not pass through the intact cell membranes of normal or early apoptotic cells but can pass through the cell membranes of late apoptotic and necrotic cells and stain the cell nucleus 28 . Therefore, PI is excluded from living cells (Annexin V-/PI-) and early apoptotic cells (Annexin V+/PI-), while late apoptotic and necrotic cells are stained double-positive (Annexin V+/PI+). Interestingly, during ferroptosis, cell membranes are subjected to uncontrolled lipid peroxidation, ultimately causing cell membrane disruption. Finally, cells undergoing ferroptosis were stained double-positive (Annexin V+/PI+). Thus, Annexin V-FITC/PI cannot distinguish apoptosis and ferroptosis, and other experiments are needed for additional validation. In the current study, we further examined intracellular Fe 2+ , lipid peroxidation, and mitochondrial membrane potential to determine what kind of cell death is involved. 
After overexpression of EPS15-AS1, intracellular Fe 2+ and lipid peroxidation were enhanced, and mitochondrial membrane potential was disrupted. Moreover, co-overexpression of EPS15-AS1 and EPS15 attenuated the damaging effects of EPS15-AS1. With bioinformatic analysis, we further found that AKR1B1, which can influence ferroptosis, was associated with EPS15. AKR1B1 was overexpressed in LIHC cells, and overexpression of EPS15-AS1 inhibited AKR1B1 expression. Moreover, overexpression of EPS15-AS1 and AKR1B1 in HepG2 cells showed similar invasiveness to normal HepG2 cells and had normal levels of Fe 2+ , lipid peroxidation, and mitochondrial membrane potential. This confirmed that AKR1B1 can promote LIHC cell activity against ferroptosis. In addition, Zhang et al. also reported that AKR1B1 has the ability to promote resistance to EGFR-targeted therapy in lung cancer by enhancing glutathione de novo synthesis 23 . However, the current study had some limitations as well. The mechanism by which EPS15 promotes AKR1B1 is still unclear. Furthermore, whether AKR1B1 also promotes LIHC cell activity by facilitating the glutathione de novo synthesis is unknown. Therefore, future studies should further clarify the exact mechanisms of EPS15 and AKR1B1 promoting hepatocellular carcinoma.
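To make the Annexin V/PI quadrant logic discussed above explicit, the toy sketch below classifies a few hypothetical flow-cytometry events into the four quadrants. The thresholds and intensities are invented for illustration only; real gating is performed in the cytometer software, and double-positive events still require the additional ferroptosis readouts (Fe 2+, lipid peroxidation, JC-1) used in this study to be interpreted.

import numpy as np

annexin = np.array([120, 900, 850, 100])   # hypothetical Annexin V-FITC intensities
pi      = np.array([ 80,  90, 700, 650])   # hypothetical PI intensities
thr_annexin, thr_pi = 500, 400             # made-up gating thresholds

for a_val, p_val in zip(annexin, pi):
    if a_val < thr_annexin and p_val < thr_pi:
        label = "viable (Annexin V-/PI-)"
    elif a_val >= thr_annexin and p_val < thr_pi:
        label = "early apoptotic (Annexin V+/PI-)"
    elif a_val >= thr_annexin and p_val >= thr_pi:
        label = "late apoptotic OR ferroptotic (Annexin V+/PI+)"
    else:
        label = "membrane-damaged / necrotic (Annexin V-/PI+)"
    print(label)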
Conclusion In conclusion, the current study showed that EPS15-AS1 expression had an inhibitory effect on hepatocellular carcinoma. Further investigation demonstrated that EPS15-AS1 reduced EPS15 expression and thus downregulated AKR1B1 expression, which finally inhibited the invasiveness of LIHC cells and induced ferroptosis in LIHC. In general, EPS15-AS1 may be a candidate target for hepatocellular carcinoma and may be a therapeutic strategy to overcome drug resistance.
Competing Interests: The authors have declared that no competing interest exists. Epidermal growth factor receptor substrate 15 (EPS15) is part of the EGFR pathway and has been implicated in the tumorigenesis of various cancers. Increasing evidence suggests that long noncoding RNAs (lncRNAs) play an essential role in liver hepatocellular carcinoma (LIHC) by regulating the expression of proteins and genes. Through analysis of The Cancer Genome Atlas (TCGA) database, we found that EPS15 is highly expressed in LIHC tissue and that lncRNA EPS15-antisense1 (EPS15-AS1) is decreased in LIHC cell lines. However, the function of EPS15-AS1 in LIHC is still unknown. When EPS15-AS1 was overexpressed in HepG2 cell lines, the expression of EPS15 was reduced and cell activity and invasiveness were inhibited. In addition, we observed an increase in Fe 2+ ions and lipid peroxidation after overexpression of EPS15-AS1, and further analysis showed that the susceptibility to ferroptosis increased. Aldo-keto reductase family 1 member B1 (AKR1B1) belongs to the aldo/keto reductase superfamily and is involved in maintaining the cellular redox balance. Survival analysis of the TCGA database revealed that patients with higher AKR1B1 levels have a lower survival rate. We also found that EPS15 enhanced AKR1B1 expression in LIHC, and AKR1B1 had the ability to promote cell invasiveness. Moreover, overexpression of AKR1B1 alleviated the promoting effect of EPS15-AS1 on ferroptosis. Therefore, EPS15-AS1 can induce ferroptosis in hepatocellular carcinoma cells by inhibiting the expression of EPS15 and AKR1B1 and disrupting the redox balance. EPS15 and AKR1B1 may serve as diagnostic biomarkers, and lncRNA EPS15-AS1 is a potential therapeutic target for LIHC.
Supplementary Material
Funding This work is supported by the Health Science and Technology Program of Inner Mongolia Autonomous Region [202201570]. Data Availability The data used in this study are available from the corresponding author upon reasonable request. Abbreviations EPS15: epidermal growth factor receptor substrate 15; lncRNA: long noncoding RNA; LIHC: liver hepatocellular carcinoma; TCGA: The Cancer Genome Atlas; EPS15-AS1: lncRNA EPS15-antisense1; AKR1B1: aldo-keto reductase family 1 member B1; RTKs: receptor tyrosine kinases; EGFR: epidermal growth factor receptor; DMEM: Dulbecco's modified Eagle's medium; FBS: fetal bovine serum; RT-qPCR: real-time quantitative PCR analysis.
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):1030-1040
oa_package/1d/e5/PMC10788721.tar.gz
PMC10788722
0
Introduction Lung cancer is one of the most common respiratory diseases, and early screening and detection are key to improving its overall prognosis. 1 Solid lung lesions are among the most common imaging manifestations of respiratory disease. With the widespread use of chest computed tomography (CT), more and more lung lesions are being detected, yet the distinction between benign and malignant lesions still relies on pathological diagnosis. 2 , 3 Conventional bronchoscopy has difficulty reaching peripheral pulmonary lesions (PPLs) and offers little for the diagnosis of peripheral lung cancer. 4 Clinical methods for the early diagnosis of peripheral lung cancer are therefore urgently needed. With the rapid development of interventional techniques for lung cancer, endobronchial ultrasound-guided transbronchial lung biopsy/cryobiopsy with a guide sheath (EBUS-GS-TBLB or EBUS-TBLC) has gradually matured. 5 The small ultrasound probe can be advanced through the working channel of the bronchoscope to perform a 360° cross-sectional scan of the lesion site, display ultrasound images of the lesion and surrounding tissue, and explore PPLs that cannot be visualized with a conventional bronchoscope, which improves the diagnostic yield for lung cancer in Chinese patients with a low complication rate. 6 - 9 Studies have shown that cryobiopsy has been used to diagnose interstitial lung disease, lung cancer, and PPLs, and as a post-transplant test. 10 , 11 Samples obtained by cryobiopsy meet histopathological requirements and offer the advantages of larger specimens, fewer artifacts, more alveolar tissue, and a higher diagnostic rate. 12 - 15 Cryobiopsy is considered the first option for the diagnosis of benign lesions. However, the diagnostic rate still depends largely on the location of the lesion relative to the probe. Therefore, a new method was designed to improve the diagnostic rate of the technique by passing the probe directly through the lesion. Our study aimed to evaluate the diagnosis of PPLs with blind-ending type (Type I) and pass-through type (Type II) procedures of EBUS-GS-TBLB or EBUS-TBLC. In addition, we evaluated and optimized the technique using information on complications, clinical data, and pathological diagnosis.
Methods Study design This study was conducted in the Department of Respiratory and Critical Care Medicine, The Huai'an Clinical College of Xuzhou Medical University. The respiratory medicine unit performs ∼3000 respiratory endoscopies per year, including advanced diagnostic and therapeutic bronchoscopies. Patients with endobronchial lesions biopsied during the initial airway examination, as well as patients with incomplete clinical data or information, were excluded. A total of 126 patients with PPLs were enrolled between November 1, 2020 and October 31, 2022. All patients were diagnosed with PPLs by chest CT or PET-CT. There were 60 males and 66 females aged from 28 to 80 years. In 62 patients, a bronchus (diameter ≥1.4 mm, measured on CT images) penetrated the lesion, while in 64 patients no penetrating bronchus was present. For lesions with an eligible bronchus, we performed pass-through type (Type II) EBUS-GS-TBLB or EBUS-TBLC; for lesions without an eligible bronchus, we performed blind-ending type (Type I) EBUS-GS-TBLB or EBUS-TBLC. Inclusion criteria Patients with PPLs diagnosed by chest CT or PET-CT; no history of extra-pulmonary malignant tumor; patients willing to cooperate. Exclusion criteria Lesions located in the superior lobe of the right lung or the left superior division bronchus (areas the cryoprobe cannot reach); severe pulmonary infection with high fever or extremely poor cardiopulmonary function; acute bronchial asthma exacerbation or active massive hemoptysis; abnormal function of the heart, lung, brain, kidney, or other organs. EBUS-GS-TBLB and EBUS-TBLC procedure EBUS-GS-TBLB EBUS-GS was performed using standard techniques as previously reported. 16 A representative case of EBUS-GS in a patient with PPLs is shown in Fig. 1. Briefly, using a thin-section chest CT scan for guidance, a thin bronchoscope (BF-P260F; Olympus, Tokyo, Japan) was advanced as close as possible to the target peripheral lesion under general anesthesia with a laryngeal mask airway (LMA). Then, a 20-MHz radial EBUS probe (UM-S20-17S; Olympus) covered with a GS (K-201; Olympus) was introduced through the working channel of the bronchoscope to precisely locate the target lung lesion. In line with previous studies, 16 - 19 the position of the radial EBUS probe relative to the target peripheral lesion was classified as within, adjacent to, or outside the lesion (Fig. 2). After the target lesion was identified on radial EBUS, the guide sheath was locked in place, the radial EBUS probe was removed, and forceps biopsy and brush cytology were then performed. Five to eight biopsies were taken during each round of the procedure. 5 Biopsied specimens were fixed in formalin solution and sent to the pathology laboratory immediately for processing and analysis. EBUS-TBLC Patients underwent transbronchial lung cryobiopsy (TBLC) under LMA (Well Lead Medical Co., Ltd, Guangzhou, China). We performed TBLC with a flexible bronchoscope (5.9 mm distal end diameter, 2.8 mm working channel diameter; EVIS BF-1T260, Olympus, Tokyo, Japan) and a 1.9 mm cryoprobe (ERBECRYO 2; Erbe Elektromedizin GmbH, Tübingen, Germany). A 1.4 mm 20-MHz radial probe (UM-S20-17S; Olympus, Tokyo, Japan) was used to identify the target lesion in the peripheral pulmonary region and to measure the depth of the lesion at the same time. A dilation balloon (BDC-10/55-7/18; Micro-Tech (Nanjing) Co., Ltd, Nanjing, China) was routinely used to achieve hemostasis.
Disposable biopsy forceps were used to grasp the front end of the dilation balloon at the tip of the bronchoscope, and the balloon was then placed in the segmental or subsegmental bronchus. After the target lesion was identified, the cryoprobe was inserted through the working channel of the bronchoscope and placed at the desired location under direct bronchoscopic visualization. The lesion was frozen for 4-6 seconds using the cryoprobe. To minimize damage to the mucosa and vocal cords, the bronchoscope was immediately withdrawn together with the cryoprobe and the attached tissue upon release of the footswitch. The dilation balloon was then inflated for 2 minutes immediately after removal of the bronchoscope. The bronchoscope was then reinserted to assess hemostasis, and TBLC was repeated 3-5 times until samples of adequate volume were obtained. The tissue samples were immediately fixed in 10% neutral-buffered formalin. Postoperative management All patients were admitted to the recovery room postoperatively and underwent chest X-ray or CT examination within 3 hours. In patients who underwent cryobiopsy, the airway mucosa, vocal cord structure, and vocal cord range of motion were carefully examined after the procedure to evaluate for cryobiopsy-related complications. Statistical analysis We included all eligible patients from the study opening to closing dates in our analyses. Statistical analyses were performed blinded to patients' clinical treatment information. Variables assumed to be normally distributed are expressed as mean ± SD, whereas non-normally distributed variables are expressed as medians. Categorical data are expressed as absolute numbers and percentages. Statistical analyses were performed with SPSS 23. The significance of distribution differences between groups was estimated by the χ2 test or Fisher's exact test, and all statistical tests were two-sided. The independent-samples t test was used to compare baseline data between the two groups. P < 0.05 was considered statistically significant.
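To illustrate how the χ2 and Fisher's exact comparisons described in the statistical analysis can be carried out, the sketch below tests a 2×2 diagnostic-yield table with SciPy. The counts are hypothetical placeholders rather than the study data, and the actual analyses in this study were performed in SPSS 23.

from scipy.stats import chi2_contingency, fisher_exact

#                diagnostic, non-diagnostic
table = [[45, 20],   # hypothetical Type I counts
         [47, 14]]   # hypothetical Type II counts

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, P = {p:.3f}, dof = {dof}")

# Fisher's exact test is preferred when any expected cell count is small.
odds_ratio, p_exact = fisher_exact(table)
print(f"Fisher's exact P = {p_exact:.3f}")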
Results Clinical baseline characteristics A total of 136 patients underwent EBUS-GS-TBLB or EBUS-TBLC procedures. Ten cases were excluded: pathological information was missing in 3 cases, and localization failed in 7 cases. In total, 126 cases were included in the analysis. Representative cases are presented in Figure 1. Among them, 66 (52.4%) underwent Type I and 60 (47.6%) underwent Type II procedures. Lesions were located in the middle/lingular lobe in 22 cases (17.5%) and in the lower lobe in 104 cases (82.5%). The mean lesion diameter was 28.21 mm. In 62 cases (49.2%) the lesion was concentric with the probe, while in 64 cases (50.8%) it was eccentric. In the concentric group, 36 (58%) underwent Type I and 26 (42%) underwent Type II procedures; in the eccentric group, 30 (46.9%) underwent Type I and 34 (53.1%) underwent Type II. Among forceps biopsies, 34 (53.1%) were Type I and 30 (46.9%) were Type II; among cryobiopsies, 32 (51.6%) were Type I and 30 (48.4%) were Type II. Clinical baseline characteristics did not differ between the forceps biopsy and cryobiopsy groups (Table 1). Procedure characteristics For both Type I and Type II procedures, forceps biopsy or cryobiopsy was randomly selected to obtain local tissue specimens. The median procedure time for all enrolled patients was 37.2 min (20-50 min). The duration of the Type II procedure was slightly shorter than that of Type I, but the difference was not statistically significant (P = 0.286). Details of the procedures are listed in Table 1. Comparison of accuracy of pathological diagnosis between different procedures The overall diagnosis rate of the 126 patients undergoing EBUS-GS-TBLB or EBUS-TBLC was 73% (92/126). Regarding factors influencing the diagnostic yield, the yield was higher for Type II (46/60, 76.7%) than for Type I (46/66, 69.7%), and the method type had a significant influence on the diagnostic yield (P = 0.012, χ2 = 4.699) (Table 2). We compared the outcomes of the different procedures for forceps biopsy and cryobiopsy. Diagnostic yields for Type I with forceps biopsy (n=34), Type I with cryobiopsy (n=32), Type II with forceps biopsy (n=30), and Type II with cryobiopsy (n=30) were 72.5%, 64.5%, 70.4%, and 74.2%, respectively (Figure 2A). We further compared the outcomes of the different procedures for concentric and eccentric lesions. Diagnostic yields for Type I with eccentric lesions (n=30), Type I with concentric lesions (n=36), Type II with eccentric lesions (n=34), and Type II with concentric lesions (n=26) were 60%, 77.8%, 67.6%, and 88.5%, respectively (P < 0.05) (Figure 2B). Comparison of pathological diagnosis between different procedures In the Type I with eccentric lesion group (n=30), 6 cases were diagnosed as malignant (3 adenocarcinoma, 2 squamous cell carcinoma, and 1 metastatic carcinoma); 6 cases were not diagnosed as malignant by the procedure but were subsequently confirmed as malignant by surgery or CT-guided lung puncture biopsy; 12 cases were diagnosed as benign; and 5 cases did not receive a definitive benign diagnosis.
In the Type I with concentric lesion group (n=36), a pathological diagnosis was obtained in 28 cases: 19 cases were diagnosed as malignant (10 adenocarcinoma, 5 squamous cell carcinoma, 1 small cell carcinoma, 1 unclassified carcinoma, and 1 metastatic carcinoma); 4 cases were not diagnosed as malignant by the procedure but were confirmed as malignant by surgery or CT-guided lung puncture biopsy; 9 cases were diagnosed as benign; and 3 cases did not receive a definitive benign diagnosis. In the Type II with eccentric lesion group (n=34), a pathological diagnosis was obtained in 23 cases: 14 cases were diagnosed as malignant (8 adenocarcinoma, 2 squamous cell carcinoma, 3 unclassified carcinoma, and 1 metastatic carcinoma); 5 cases were not diagnosed as malignant by the procedure but were confirmed as malignant by surgery or CT-guided lung puncture biopsy; 9 cases were diagnosed as benign; and 5 cases did not receive a definitive benign diagnosis. In the Type II with concentric lesion group (n=26), a pathological diagnosis was obtained in 23 cases: 17 cases were diagnosed as malignant (8 adenocarcinoma, 4 squamous cell carcinoma, 1 small cell carcinoma, 1 mucinous carcinoma, and 3 unclassified carcinoma); 0 cases were undiagnosed as malignant (cancerous tissue was visible but the tumor type was not clear); 6 cases were diagnosed as benign; and 3 cases did not receive a definitive benign diagnosis. The details of the histological findings are displayed in Table 3. Comparison of complication rates in different groups A total of 126 patients underwent EBUS-GS-TBLB or EBUS-TBLC, and the procedures were well tolerated. One case in the Type I group and one in the Type II group showed a small amount of bleeding at the puncture site. Hemostasis was achieved in all patients after suction with local instillation of 4°C saline or adrenaline. Only one episode of hypoxemia occurred in each of the Type I and Type II groups; these patients were given an increased oxygen concentration and returned to the ward after the procedure was stopped. There was no statistically significant difference in complications between the two groups, including adverse effects such as hemorrhage. Notably, no patient who underwent TBLC had mucosal or vocal cord damage.
Discussion In recent years, endobronchial ultrasound equipment and technology have continuously improved, and the diagnostic rate of transbronchial biopsy has risen accordingly. According to published reports, the positive rate of EBUS-GS-TBLB or EBUS-TBLC for PPLs is 58.82%-79.29%. 5 With this technique, the bronchoscope is advanced to the distal bronchus of the segment in which chest CT shows the lesion, and the radial probe, inserted into the guide sheath, is advanced along the biopsy channel for ultrasonic exploration. 20 , 21 After the best image of the lesion is obtained, the guide sheath is fixed in place and the probe is withdrawn for biopsy. Fixation of the guide sheath at the lesion facilitates multiple sampling, increases diagnostic rates, and reduces the risk of bleeding. 22 , 23 EBUS-GS-TBLB and EBUS-TBLC overcome the difficulty of reaching small peripheral airways with conventional bronchoscopy, and the ultrasound probe improves the detection rate of PPLs at bronchoscopy. The literature indicates that the diagnostic accuracy of EBUS-GS-TBLB for PPLs is between 68.9% and 87.5%, 24 while our diagnostic yield with EBUS-GS-TBLB or EBUS-TBLC reached 73% (92/126) with a low complication rate of 2.6%. In this study, we divided all enrolled cases into Type I and Type II; the analysis found that the diagnostic yield was higher for Type II (46/60, 76.7%) than for Type I (46/66, 69.7%), and the method type had a significant influence on the diagnostic yield (P = 0.012, χ2 = 4.699). In 2008, Dr. Hetzel first demonstrated the feasibility of cryobiopsy and reported the diagnosis of 12 cases of endobronchial tumor by the cryotechnique for the first time. 10 Specimens collected at low temperature through the flexible bronchoscope were found not only to maintain high histological integrity but also to preserve their internal molecular markers. Cryobiopsy is a biopsy method in which refrigerant applied to the tip of the cryoprobe causes ice crystals to form so that lung tissue adheres to the probe and is then removed through the bronchus. In this study, 64 of the enrolled cases underwent forceps biopsy and 62 underwent cryobiopsy; the diagnostic yields of the two groups were 70.3% and 75.6%, respectively, a difference that was not statistically significant. We speculate that this may be due to the small number of cryobiopsy cases; we will enroll more cases in the future to further examine the influence of the biopsy method on pathological diagnosis across method types. Kho et al. demonstrated that probe orientation remains an important factor affecting diagnostic yield and that cryobiopsy significantly increased the diagnostic yield of eccentrically and adjacently oriented lesions. 5 To further analyze the influence of method type on diagnostic yield, we divided the 126 cases into four groups: Type I with eccentric lesions (n=30), Type I with concentric lesions (n=36), Type II with eccentric lesions (n=34), and Type II with concentric lesions (n=26). Type I with eccentric lesions (60%) had the lowest diagnostic yield, which is mainly attributable to the location of the lesion relative to the probe. Our study suggests that the Type II procedure has a higher diagnostic yield and that the lesion subtype (concentric or eccentric) significantly influences the yield, with Type II plus concentric lesions showing a higher diagnosis rate than eccentric lesions. In contrast, the biopsy method (forceps biopsy or cryobiopsy) did not determine the final diagnosis.
In addition, Type I with eccentric lesions had the lowest diagnostic yield. Interestingly, we found that forceps biopsy was more accurate than cryobiopsy in Type I. Multi-center randomized controlled trials are needed to further verify the results of this study. Finally, additional navigation methods, such as robotic bronchoscopy or cone-beam CT, can certainly enhance the diagnostic yield.
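To make the reported Type I versus Type II comparison concrete, the following minimal sketch (Python with SciPy, which is an assumption, since the paper does not name its statistical software) rebuilds a plausible 2×2 contingency table from the reported yields (46/66 for Type I, 46/60 for Type II) and the single bleeding event per group. Because the authors' exact table is not given, the computed statistics may differ from the reported χ2 = 4.699.

```python
# Hypothetical reconstruction of the Type I vs. Type II comparisons.
# Counts are taken from the reported yields (46/66 and 46/60) and complication
# numbers; the authors' actual contingency tables and software are not specified.
from scipy.stats import chi2_contingency, fisher_exact

# rows: Type I, Type II; columns: diagnostic, non-diagnostic
yield_table = [[46, 66 - 46],
               [46, 60 - 46]]

chi2, p, dof, expected = chi2_contingency(yield_table)
print(f"diagnostic yield: chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")

# With very small counts (1 bleeding event per group), Fisher's exact test is
# the safer choice for the complication comparison.
odds_ratio, p_exact = fisher_exact([[1, 66 - 1],
                                    [1, 60 - 1]])
print(f"bleeding: Fisher exact p = {p_exact:.3f}")
```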
Competing Interests: The authors have declared that no competing interest exists. Background and objective: Recently, endobronchial ultrasonography with a guide sheath (EBUS-GS) has been increasingly used in the diagnosis of peripheral pulmonary lesions (PPLs) through a natural orifice. However, the diagnostic rate still depends largely on the location of the lesion and the probe. Here, we report a procedure intended to improve the diagnostic rate of EBUS-guided transbronchial lung cryobiopsy (EBUS-TBLC), which was performed under general anesthesia with a laryngeal mask airway (LMA) in all patients. This study retrospectively evaluated the diagnosis of PPLs with the 'blind-ending' type (Type I) and 'pass-through' type (Type II) procedures of EBUS-GS-TBLB or EBUS-TBLC, respectively. Methods: Retrospective review of 136 cases performed by EBUS-GS-TBLB or EBUS-TBLC for PPLs over 2 years. Results: A total of 126 cases of EBUS-GS-TBLB or EBUS-TBLC were performed during the study period. Among them, 66 (52.4%) were performed with the Type I procedure and 60 (47.6%) with the Type II procedure. Clinical baseline characteristics did not differ between the two groups. The overall diagnosis rate of the 126 patients undergoing EBUS-GS-TBLB or EBUS-TBLC was 73% (92/126), and the method type had a significant influence on the diagnostic yield ( P = 0.012, χ2 = 4.699). Among them, the diagnostic yields for Type I with forceps biopsy (n=34), Type I with cryobiopsy (n=32), Type II with forceps biopsy (n=30), and Type II with cryobiopsy (n=30) were 72.5%, 64.5%, 70.4% and 74.2%, respectively (Figure 2A). The study further compared the outcomes of the different procedures in concentric and eccentric lesions. Diagnostic yields for Type I with eccentric (n=30), Type I with concentric (n=36), Type II with eccentric (n=34), and Type II with concentric (n=26) lesions were 58.2%, 76.9%, 60.2% and 74.8%, respectively ( P < 0.05). The incidence of complications among the 126 patients was 2.6%. Conclusion: EBUS-GS-TBLB and EBUS-TBLC are both safe techniques with high diagnostic value, and the method type has a significant influence on the diagnostic yield. Moreover, the Type II procedure has a higher diagnostic yield, and Type I with eccentric lesions had the lowest diagnostic yield.
This work was supported by grants from the Huai'an Natural Science Research Project (HAB 201928) and the Huai'an Key Laboratory of Immunology (HAP2020). Ethics approval and consent to participate All sample collection was approved by the Human Research Ethics Committee of The Huai'an Clinical College of Xuzhou Medical University (YX-2021-087-01). Abbreviations CT: chest computed tomography; EBUS: endobronchial ultrasound; GS: guide sheath; PPLs: peripheral pulmonary lesions; TBLB/TBLC: transbronchial lung biopsy/cryobiopsy
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):908-915
oa_package/eb/2f/PMC10788722.tar.gz
PMC10788723
0
Introduction Lung cancer is usually diagnosed at an advanced, inoperable stage. A very important reason is the lack of early disease symptoms and the lack of blood tests such as those available for other cancer types, for example breast, prostate and gastrointestinal cancer. Very few patients present with hemoptysis from endobronchial disease in early-stage disease and seek medical evaluation. Moreover, since most patients are smokers, they tend to attribute their progressive dyspnea to their smoking habit and chronic obstructive pulmonary disease (COPD) and seek medical attention only when their clinical status is very severe. There are also cases in which pulmonary nodules are observed during re-staging for other cancer types such as breast, gastrointestinal or prostate cancer. In the past 10 years there has been a bloom of navigation technologies for single pulmonary nodules. Radial endobronchial ultrasound has been used with or without C-ARM fluoroscopy for guidance assistance 1 . Electromagnetic technologies from Medtronic and virtual bronchoscopy navigation with Archimedes have also been on the market over the past 10 years, with minor differences between them 2 - 4 . C-Arm fluoroscopy can also be used when performing electromagnetic guidance. Cone-beam CT (CBCT) can be used as another real-time method 5 , 6 . Rapid on-site evaluation (ROSE) is a method in which cytology samples obtained during the biopsy procedure are used to identify whether there is cancer in the nodule and whether the sample is sufficient for further investigation with immunohistochemistry and gene expression analysis 7 . Robotic bronchoscopy was introduced in 2018 with the Ion and Monarch platforms 8 , 9 . Ablation as a local treatment has been introduced over the past 20 years, and different generators and methods are used, such as radiofrequency, microwave, thermosphere and cryoablation. Percutaneous and surgical probes are also available, chosen according to the location of the lesion 10 - 13 . The usual side effects are pneumothorax, hemoptysis or even hemothorax. A recent meta-analysis presented data showing that cryoablation is superior to radiofrequency ablation, but not to microwave ablation 12 . In the past 10 years new endobronchial ablation systems have been introduced to the market, such as the radiofrequency catheter by Broncus guided by the Archimedes electromagnetic navigation system and the NEUWAVE TM FLEX Microwave Ablation System guided by the MONARCH® Platform by Ethicon. There are certain advantages to each methodology that will be discussed in the sections that follow.
Materials and Methods We made a thorough search of the literature on PubMed for new publications using strictly the following key words: radiofrequency ablation systems for lung cancer, microwave ablation systems for lung cancer, cryo ablation systems for lung cancer, thermosphere ablation systems for lung cancer, endobronchial ablation systems for lung cancer. We identified 35 publications, including case reports and publications in all languages, and used the data from 28. We focused our manuscript strictly on these data, presenting up-to-date information. Percutaneous ablation systems Radiofrequency ablation with percutaneous probes for lung cancer has been well established since 2000, and there are numerous studies of the effectiveness of this technique 14 . The next equipment to be validated for lung cancer was the microwave generator 11 . There are differences between radiofrequency and microwave ablation in both effect and methodology, and microwave technology has been observed to be more efficient than radiofrequency 15 . Currently, the thermal effect of microwave application is enhanced by adding transbronchial thermal gel 16 . Moreover, thermosphere technology combined with microwave ablation has increased the efficiency of the method 13 . Cryoablation is the latest technology in the field of local percutaneous treatment under computed tomography guidance 17 . The effect of cryoablation is superior to radiofrequency ablation; however, it is not superior to microwave ablation 12 . Navigation for endobronchial ablation systems The main issue with single pulmonary nodules is the diagnosis. There are several distinct reasons to intervene early for small malignant nodules: a) the lower the T descriptor (according to TNM v8), the better the survival (a weak relation), and the lower the probability of N1 disease and, with it, of local relapse; b) the smaller the nodule, even within the same T descriptor (e.g. stage I), the better the long-term survival; c) the smaller the nodule, the lesser the intratumoral heterogeneity and the mutational probability, which is an independent risk factor for relapse; d) no or less postinterventional loss of lung function after a minimally invasive approach should leave more options open should the cancer return; and e) the smaller the nodule, the better the options for add-on surgery / SBRT / transthoracic ablation (the KISS) or drug treatment like EPR, ITC and TBNI if complete endobronchial treatment is not possible. We currently use radial-EBUS with or without C-ARM fluoroscopy or in combination with other navigation systems 1 . The main biopsy tools are the cytology brush, mini forceps, fine needle aspiration (22G) and thin cryoprobes (Figure 2 ). We even have the capability of making tunnels through the lung parenchyma with needles and balloon dilation to obtain samples 18 , 19 . This method is safe and effective when navigation systems are used, such as the Archimedes Virtual Bronchoscopy Navigation (VBN) (Figure 3 ) or the Illumisite TM platform from Medtronic 20 . Another novel navigation technique is robotic-assisted navigation with the Monarch ® Platform (Auris Health, Inc., Redwood City, CA) 8 . Other combined techniques and equipment are currently available that can be used with radial-EBUS, such as the Cios Spin ® by Siemens Healthineers 21 , the O-arm TM O2 imaging system by Medtronic and Super Dimension TM (Covidien, Plymouth, MN, USA) 22 , 23 . A cone-beam CT apparatus (ARTIS zeego; Siemens Healthcare GmbH, Erlangen, Germany) is another piece of equipment that can also be used 24 (Figure 4 ,5).
Several studies have been performed with all of this equipment; however, higher diagnostic efficiency was observed in those studies including nodules ≥30 mm. A recent meta-analysis showed that CBCT, whether stand-alone or in combination with other technologies, is superior to all other technologies used stand-alone or in combination. It is also cost-effective in terms of QALYs in the Dutch healthcare system compared with the transthoracic approach 25 , 26 . In some studies, rapid on-site evaluation 27 or confocal laser endomicroscopy 28 was used to increase the diagnostic yield (Figure 6 ,7). Endobronchial ablation systems Radiofrequency ablation systems by Broncus are already on the market, and there are several studies 29 , 30 . A microwave catheter (Emprint TM ablation catheter with Thermosphere TM technology, Covidien, Plymouth, MN, USA) is already on the market, as are a flexible water-cooled MWA antenna (Vison-China Medical Devices R&D Center) connected to a microwave platform (Surblate, Vison) and an MWA device for ablation by Nanjing Nisionmedic (Nanjing, China) 31 - 34 . The first-in-the-world ENB-guided microwave ablation using the Illumisite TM fluoroscopic navigation platform was successfully performed in mid-2022 35 . New methods for ablation Cryoablation probes are available for percutaneous application but not for transbronchial application, although in certain cases one could use the available probes from the ERBE II system 36 . Robotic-assisted guided cryobiopsy has been presented in a previous study, and it could be the platform for future application of a cryoprobe system 37 . There have been pilot studies evaluating a novel transbronchial cryoprobe in vitro 38 , 39 ; however, human studies are still needed. Bronchoscopic thermal vapor ablation (BTVA) by Broncus has been available since 2021 for emphysema treatment, and a recent study demonstrated its efficiency when used as an ablation tool 40 , 41 . Further studies are required to explore and improve the technique before it can be reliably used for cancer treatment. Until now, microwave ablation systems have been available for percutaneous use; however, systems are being tested for transbronchial treatment in animal models 42 . Pulsed electric field (PEF) ablation is a non-thermal modality that uses a short-lived, strong electrical field created around a catheter to form microscopic pores in cell membranes (electroporation). Finally, a radiofrequency ablation catheter compatible with endobronchial ultrasound has been investigated 43 . The use of rapid on-site evaluation (ROSE) Due to the increasing number of computed tomography scans performed under the new screening guidelines for lung cancer, more and more patients are diagnosed with pulmonary nodules. It is absolutely necessary to identify whether or not malignancy is present. We can use positron emission tomography-computed tomography (PET-CT) as an initial examination; however, there is still a diagnostic issue related to the size of the nodules. In the case of nodules ≤1.2 cm and with a low metabolic rate (Ki-67 ≤10%), the technique cannot provide accurate data even with a second, delayed re-examination after 30 minutes 44 . Therefore, biopsy is absolutely necessary in the case of a PET-CT examination with a low metabolic rate (SUV ≤3). Novel diagnostic techniques can provide adequate navigation and diagnostic yield for pulmonary nodules ≤1.2 cm. However, the sample will mostly be cytologic, since a cytology brush or needle will be used.
Where possible, we can of course use biopsy forceps or small 1.1-mm cryoprobes in order to obtain tissue. Again, the sample will be small in quantity, but not in quality. With rapid on-site evaluation we can verify the malignancy and the quality of the sample (i.e., whether additional immunohistochemical examinations can be performed). After biopsy, approximately 2-5 minutes are needed to prepare the sample for evaluation under the microscope. We evaluate at least two samples from the same site 45 .
Discussion Currently we propose a screening methodology for smokers, ex-smokers and people at high risk for lung cancer 46 . Therefore, more patients are being diagnosed with pulmonary nodules. In the past ten years there has been a bloom in both novel diagnostic techniques for pulmonary nodules and treatments for advanced-stage lung cancer. Radial-EBUS, along with fluoroscopic techniques such as C-ARM, DYNA-CT, O-ARM and Cios Spin, has achieved increased navigation and diagnostic yield. Electromagnetic platforms such as the Archimedes ® , Illumisite TM , and Veran's SPiN Thoracic Navigation System have increased our navigation and diagnostic yield even further for pulmonary nodules ≤30 mm. We now have the capability to identify the location of vessels during the diagnostic procedure. Robotic-assisted bronchoscopy was also introduced in 2018 with the Monarch and Ion platforms, each with its own characteristics; both systems achieve equivalent navigation and diagnostic results. Alongside all these navigation systems, we can use rapid on-site evaluation to assess our samples and complete the diagnostic sequence earlier. Rapid on-site evaluation has been established for more than 5 years; it requires a learning curve for the bronchoscopy operator, or a cytologist can be present at the site of diagnosis. Confocal laser endomicroscopy can also be used as a rapid on-site tool; again, a learning curve is required, or a cytologist on site. Since we have the ability to diagnose early-stage disease, whether lung cancer or metastatic disease, we can use advanced systems for minimally invasive local therapy. Local treatment of malignant pulmonary nodules, whether primary lung cancer or metastases, is efficient when performed percutaneously under computed tomography guidance with radiofrequency, microwave or cryoablation systems. Currently there are many studies presenting the efficiency and adverse effects of these systems and methods, and the indications are also very specific and well known. There are several other treatment modalities, such as stereotactic body radiotherapy or radiotherapy with or without systemic administration of drugs 47 , 48 . These treatments have different applications and results from local ablation; it remains for the treating physician to choose the best method of local disease control based on the clinical features of the patient. Radiotherapies are known to have adverse effects such as pneumonitis or esophagitis; moreover, application of this treatment modality is excluded when vessels are near the lesion. The major adverse effects of local percutaneous systems are pneumothorax and hemothorax. A pneumothorax of less than 1 L can be treated on site with a pneumocatheter; if it exceeds 1 L, a Bülau drain is inserted, but several days of hospitalisation will then be necessary. Indeed, in severe emphysema the application of percutaneous ablation might not be possible. Transbronchial ablation is an option now more than ever, since we have efficient navigation. We have tools to minimize hemorrhage, such as spraying polymer dust to stop bleeding, and in the case of fistulas we can use stents or even emphysema valves to block a sublobar segment. A balloon dilation system can also be applied to block the hemorrhage; the balloon blockers can stay inside a patient for up to 7 days if necessary. Moreover, we can eliminate most malignant pulmonary nodules in a 'one-stop shop' fashion, since rapid on-site diagnosis is available.
Positron emission computed tomography (PET-CT) can provide staging for patients and identify possible biopsy sites. However, for small pulmonary nodules ≤1.2 cm and for lesions with a low metabolic rate (Ki-67 ≤10%), biopsy is still needed. When a pulmonary nodule is ≥3 cm, adjuvant systemic therapy should be used. Specifically, for these patients with non-small cell lung cancer (NSCLC), we have to identify the subtype: adenocarcinoma, squamous cell carcinoma or, where this is not possible, not otherwise specified (NOS). First-line targeted treatment with tyrosine kinase inhibitors (TKIs) is based on the expression of epidermal growth factor receptor (EGFR), anaplastic lymphoma kinase (ALK) and ROS proto-oncogene 1 (ROS-1) 49 , 50 . In EGFR-positive patients, resistance mutations such as T790M were observed, and therefore second-line tyrosine kinase inhibitors (osimertinib) were produced, which are now administered as first-line treatment 51 . However, resistance to osimertinib was also observed; in this case another inhibitor (of the MET pathway), capmatinib, was observed to overcome this resistance by decreasing the generation of cancer-associated fibroblasts. Capmatinib suppresses the MET/AKT/Snail signaling pathway 52 . Moreover, v-Raf murine sarcoma viral oncogene homolog B (BRAF V600E), neurotrophic tyrosine receptor kinase (NTRK 1/2/3) gene fusions, mesenchymal-epithelial transition exon 14 skipping (MET) and rearranged-during-transfection (RET) rearrangements are also novel targeted gene alterations 53 . Regarding immunotherapy, we investigate programmed death-ligand 1 (PD-L1) expression in order to administer immunotherapy alone (PD-L1 ≥50%) or in combination with chemotherapy (PD-L1 ≤49%) 54 . In recent years, based on preclinical and clinical trials, other mutations such as Kirsten rat sarcoma (KRAS), amplification of human epidermal growth factor receptor-2 (HER2), and other genotypes of the driver genes have been considered highly targetable 55 . We use next-generation sequencing (NGS) panels to identify these gene alterations 49 . Tissue biopsies remain the gold standard for identifying gene alterations; however, where this is not possible, we can use liquid biopsies, which can detect circulating tumor DNA (ctDNA) 56 . There have been numerous studies in which ablation systems have been used along with the administration of local or systemic drugs 57 - 59 . We already have sufficient data demonstrating the synergistic effect of ablation systems with the co-administration of drugs, and this will be the future direction of research in the field of local treatment 15 , 60 - 62 . Moreover, our group has evaluated a radiofrequency ablation catheter for vessels from Covidien for the treatment of small malignant pulmonary nodules 63 , 64 . With minor adjustments, such as the addition of a spiked tip, this catheter could be inserted more easily into a pulmonary lesion. The effect of transbronchial ablation can be assessed with radial-EBUS before and after ablation, and possibly with additional software such as elastography. Compared to other forms of thermal energy, microwave ablation is the most promising based on the currently available early to mid-term results. In our final figure we present our proposal for selecting candidates for transbronchial versus percutaneous ablation based on the site of the pulmonary nodules (Figure 8 ).
Competing Interests: The authors have declared that no competing interest exists. Single pulmonary nodules are a difficult-to-diagnose imaging finding. Currently, novel diagnostic tools such as radial-EBUS with or without C-ARM fluoroscopy, electromagnetic navigation systems, robotic bronchoscopy and cone beam-computed tomography (CBCT) can assist in the optimal guidance of biopsy equipment. Once a pulmonary nodule is diagnosed as lung cancer or metastatic disease, surgery or ablation methods can be applied as local treatment. Percutaneous ablation systems under computed tomography guidance with radiofrequency, microwave, cryo and thermosphere technology have been used for several years. In the past 10 years, extensive research has been conducted on endobronchial ablation systems and methods. We present and comment on the two different ablation approaches and provide up-to-date data.
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):880-888
oa_package/9e/b4/PMC10788723.tar.gz
PMC10788724
0
Background Liver cancer, characterized by its high malignancy, rapid progression, and significant mortality and disability rates, poses a serious threat to human health. The latest global cancer report estimated approximately 900,000 new cases of liver cancer in 2020, resulting in approximately 830,000 deaths 1 .Liver cancer ranks second, following lung cancer, in terms of mortality rate and number of deaths, resulting in an extremely poor prognosis. Hepatocellular carcinoma (HCC) is the predominant pathological type of liver cancer, accounting for approximately 85% of cases. The overall 5-year survival rate for HCC is below 20%, further decreasing for patients with advanced disease 2 , 3 . The dilemma in the systemic treatment of HCC primarily stems from the development of drug resistance, which is rooted in the heterogeneity of tumor cells 4 - 6 . Traditional bulk sequencing techniques are limited in their ability to comprehensively study tumor heterogeneity, posing significant challenges in drug development and the detection of diagnostic markers. Single-cell RNA sequencing (scRNA-Seq), spatial transcriptomics, and single-cell proteomics enable the analysis of heterogeneity across distinct cell populations, uncovering the interplay between tumor cells and the tumor microenvironment (TME), comprising lymphatic or vascular endothelial cells, immune cells, fibroblasts, various signaling molecules, and the extracellular matrix. 7 - 10 . Cancer stem cells (CSCs) constitute a minor subset of undifferentiated cells in tumor tissue, with robust self-renewal potential and tumorigenic capabilities, crucially contributing to tumor heterogeneity 11 . Continuous clonal proliferation in CSCs confers them with robust immune evasion abilities and drug resistance, contributing to treatment failure, tumor recurrence, and metastasis in systemic therapies 12 , 13 . The influence of CSCs on tumor heterogeneity stems from their robust self-renewal capabilities and their interactions with the TME 14 , 15 , including crosstalk with stromal and immune cells. These interactions activate signaling pathways like PI3K/AKT/mTOR and TGFβ, promoting tumor progression. Previous studies have shown the feasibility and benefits of utilizing scRNA-Seq to investigate tumor heterogeneity and analyze the landscape of single-cell transcriptomes 16 . Nevertheless, further comprehensive and in-depth research on CSCs is necessary, despite the maturity of scRNA-Seq and the development of algorithms for analyzing multi-dimensional data. Hence, this study seeks to employ scRNA-Seq to analyze cell communication, cellular developmental trajectories, and metabolic activities, aiming to identify genes associated with CSCs development. Additionally, through the integration of bulk RNA-Seq data, we have successfully established a comprehensive clinical prognostic model.
Methods and Materials Quality Control and Normalization of scRNA-Seq Data The single-cell RNA sequencing (scRNA-Seq) data generated and analysed during the current study are available in the Mendeley Data ( https://data.mendeley.com/datasets/skrx2fz79n/1 ) 17 and the Gene Expression Omnibus (GEO) database ( https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE149614 ) 18 , with sequencing performed on the Illumina NovaSeq 6000 platform. Upon importing the data into R (v4.2.2) using the Seurat package (v4.3.0) 19 , data merging was conducted, and the sequencing depth was ascertained. Quality control procedures were then implemented on the merged data, taking into account the total RNA molecule (UMI) count per cell (nCount), the number of detected genes per cell (nFeature), and the percentage of mitochondrial genes (percent.mt) within each cell. The following filtering criteria were employed: 1000 < nCount < 10000, 250 < nFeature < 4000, percent.mt < 15%. Subsequently, the data underwent normalization using SCTransform, and batch effects were corrected by integrating samples with the harmony package (v0.1.1) 20 in R. Dimensionality Reduction and Clustering In this study, we employed dimensionality reduction techniques and clustering methods to analyze the data. Initially, we performed principal component analysis (PCA) on the quality-controlled dataset. The scree plot was used to determine the appropriate number of dimensions for PCA, which were subsequently used in the analysis. To establish the relationships between cells, we calculated the k-nearest neighbors (KNN) and constructed a shared nearest neighbor graph. The Louvain algorithm was then applied to optimize the modularity function and identify distinct cell clusters. For further exploration and visualization, we utilized the clustree package (v0.5.0) 21 in R to determine an optimal resolution range (0.5-1.2) for defining specific cell clusters. Additionally, to enhance dimensionality reduction, we employed t-distributed stochastic neighbor embedding (tSNE) on the PCA-transformed data and generated visual representations of the cell populations. Cell Annotation Cell types were annotated based on the clustering results, followed by the annotation of cell subtypes. This study employed a combined approach, using automated annotation with the SingleR package (v1.10.0) 22 in R, along with manual annotation based on the literature, to annotate both cell types and subtypes. Prior to cell type annotation, differentially expressed gene (DEG) analysis was conducted for each cell type using the FindMarkers function in the Seurat package. DEGs were selected based on the criteria of Log2 fold change (Log2FC) > 0.5 and Benjamini-Hochberg-adjusted p -values < 0.05. Subsequently, automated annotation based on DEGs was performed using SingleR, and the results were manually curated for refinement. Next, Seurat objects representing different cell types were extracted based on the annotated cell types for further dimensionality reduction using PCA and tSNE. This process led to the clustering of cell subtypes. The aforementioned steps for cell type annotation were repeated for cell subtype annotation, with a greater emphasis on manual annotation. Finally, tSNE visualization was conducted on the annotated cell data, presenting the proportions of different cell types and cell subtypes, as well as the expression levels of marker genes.
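The pipeline above is implemented in R with Seurat, SCTransform, harmony and SingleR. Purely as an illustrative analogue of the same thresholds and steps, the sketch below uses the Python scanpy ecosystem; the input file name, the sample batch key, and standard log-normalisation (in place of SCTransform) are assumptions for the example, not the authors' actual code.

```python
# Illustrative scanpy analogue of the described Seurat/harmony workflow.
# The file name, the "sample" batch key and the MT- gene prefix are assumptions.
import scanpy as sc

adata = sc.read_h5ad("hcc_tumor_samples.h5ad")           # hypothetical merged object
adata.var["mt"] = adata.var_names.str.startswith("MT-")  # human mitochondrial genes
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], percent_top=None,
                           log1p=False, inplace=True)

# Filtering thresholds as stated in the Methods:
# 1000 < nCount < 10000, 250 < nFeature < 4000, percent.mt < 15%
keep = ((adata.obs.total_counts > 1000) & (adata.obs.total_counts < 10000) &
        (adata.obs.n_genes_by_counts > 250) & (adata.obs.n_genes_by_counts < 4000) &
        (adata.obs.pct_counts_mt < 15))
adata = adata[keep].copy()

# Normalisation, batch correction and clustering (the paper uses SCTransform,
# harmony, 30 PCs, Louvain at resolution 0.8 and a tSNE embedding).
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, n_comps=30)
sc.external.pp.harmony_integrate(adata, key="sample")    # batch key assumed
sc.pp.neighbors(adata, use_rep="X_pca_harmony", n_pcs=30)
sc.tl.louvain(adata, resolution=0.8)
sc.tl.tsne(adata, use_rep="X_pca_harmony")

# Per-cluster marker genes (Wilcoxon test), analogous to Seurat's FindMarkers
sc.tl.rank_genes_groups(adata, groupby="louvain", method="wilcoxon")
```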
Enrichment Analysis and Metabolism Analysis Differential gene expression analysis was performed on each cell cluster using the FindMarkers function in the Seurat package. The Wilcoxon rank-sum test was applied with criteria of Log2 fold change (Log2Fc) > 0.5 and Benjamini-Hochberg-adjusted p -values < 0.05 to select DEGs for each cluster. A subset of DEGs was then chosen for visualization using heatmaps. Similarly, DEGs were selected for each cell type and visualized through heatmaps. Subsequently, Gene Set Enrichment Analysis (GSEA) and Gene Set Variation Analysis (GSVA) were conducted on each cell type using the fgsea package (v1.22.0) 23 and GSVA package (v1.60.0) 24 in R. Gene sets from the Gene Ontology (GO) database were employed to assess biological functions, while gene sets from the Kyoto Encyclopedia of Genes and Genomes (KEGG) and Hallmark databases were used for signaling pathway analysis. GSVA analysis was performed specifically on the six major cell clusters, unveiling differential biological functions and signaling pathways. Furthermore, differential gene expression analysis was carried out on the Malignant Hepatocytes and CSCs subtypes within the liver cell population. DEGs were utilized for focused GSEA enrichment analysis (GO and KEGG) as well as GSVA enrichment analysis (Hallmark). Finally, metabolism analysis of the liver cell subtypes was conducted using the scMetabolism package (v2.1.0) 25 in R to evaluate the activity of metabolic pathways in both malignant and non-malignant cells. Gene sets for metabolic pathways were sourced from the KEGG and REACTOME databases. Metabolic activity scores were calculated for each cell using the VISION algorithm 26 , and selected key metabolic pathways were visualized using a bubble plot. Cell Communication Analysis We performed an analysis of cell-cell communication using the CellChat package (v1.6.1) 27 in R. Our aim was to determine the communication status of ligand-receptor pairs between cells. To achieve this, we utilized the CellChat algorithm to assess the contributions of ligand-receptor pairs, both in terms of their output and input, to various signaling pathways. By doing so, we were able to estimate the probability or strength of communication at the signaling pathway level among different cell types. To visualize the cell-cell communication networks, we relied on the probabilities or strengths of communication. Furthermore, we employed unsupervised learning through non-negative matrix factorization (NMF) 28 to identify distinctive communication patterns within cells. This approach facilitated the recognition of coordinated communication patterns, including both outgoing and incoming interactions, across multiple cell types and signaling pathway levels. Copy Number Variation Analysis In the preceding sections, we performed subtype annotation of hepatocytes through a combination of automated and manual methods. Based on this annotation, we segregated the cells into two clusters: malignant and non-malignant. To validate the accuracy of our hepatocytes subtype annotation, we employed the inferCNV package (v1.16.0) in the R programming language to conduct an analysis of copy number variation (CNV). For additional details, please refer to the following link: https://github.com/broadinstitute/inferCNV . 
By utilizing a Hidden Markov Model (HMM) for prediction and “ward.D2” hierarchical clustering, we generated visual representations, using the pheatmap package (v1.0.12) in R, to illustrate the CNV profiles of both malignant and non-malignant cells. These visualizations highlighted the presence of deletions, amplifications, or absence of variations, enabling a clear differentiation between malignant and non-malignant cells. Furthermore, the CNV profiles provided valuable insights into the heterogeneity among malignant cells. Subsequently, we normalized the expression levels of each cell and calculated the sum of squared normalized values, yielding the CNV scores. Cell Trajectory Analysis In the preceding sections, we conducted a re-clustering of hepatocytes following their subtype annotation, which enabled the identification of CSCs and HCC cells. To predict the cell's developmental trajectory from CSCs to HCC and identify genes associated with evolution and development, we performed pseudotime analysis using Monocle3. We employed the Monocle3 package (v1.3.1) 29 - 31 in R, utilizing the SimplePPT algorithm for trajectory learning and an iterative algorithm for semi-supervised pseudotime analysis. This approach allowed us to construct a developmental trajectory plot of cells. Furthermore, we validated the cell's developmental trajectory by conducting Monocle2 29 , 31 pseudotime analysis with the DDRTree algorithm. For genes related to development (Moran's I > 0.5), we conducted enrichment analysis using fGSEA, following the same detailed analysis process described earlier. Lastly, we calculated the Moran's I index of genes using a spatial differential gene algorithm. The Moran's I index ranges from -1 to 1, where 1 indicates a strong positive correlation, and Moran's I less than or equal to 0 indicates no correlation. Based on the Moran's I index, we selected genes highly correlated with development (Moran's I > 0.8) and visualized their expression dynamics along the pseudotime trajectory, and their expression patterns on the UMAP dimensionality reduction plot. Weighted Gene Co-expression Network Analysis We conducted a weighted gene co-expression network analysis (WGCNA) using the WGCNA package (v1.72-1) 32 , 33 in R to identify gene expression modules and explore their correlation with phenotypes. Initially, we applied a sample clustering tree algorithm to remove outlier samples, ensuring the stability of the co-expression network construction. Our objective was to establish a scale-free network by adjusting the soft threshold (power) from 1 to 30, guided by the scale-free topology fit index (>0.85) and average connectivity, employing the pickSoftThreshold function. Next, we transformed the adjacency matrix into a topological overlap matrix (TOM) to reduce spurious correlations and noise. 1-TOM was calculated as a significant biological indicator of interconnectivity within the co-expression network and as a distance metric for gene clustering. The dynamic tree cut algorithm was then utilized to identify gene modules, with each module containing a minimum of 50 genes. Moreover, we computed module eigengenes (MEs) that represent the characteristics of each module. These modules were hierarchically clustered, and those exhibiting similar patterns, identified by a cut height of 0.25, were merged and visually distinguished by different colors. 
Heatmaps were generated to visualize the expression profiles of MEs and to calculate module membership (MM, the correlation between MEs and the expression profiles of all genes) and gene significance (GS, the correlation between MEs and phenotypes). Furthermore, correlation heatmaps were created to illustrate the relationship between MEs and the expression profiles of all genes, as well as the correlation between modules and phenotypes. We identified modules associated with cell subtypes and generated heatmaps displaying the expression profiles of module-specific genes corresponding to each subtype. Construction and Validation of Clinical Prognostic Model In the preceding sections, we identified cell trajectory-related genes (Moran's I > 0.5) using Monocle3 pseudotime analysis and cell subtype-related genes (MEturquoise module and MEblue module) through WGCNA analysis. By combining these two gene sets, we identified the intersection genes that represent genes associated with the development of CSCs. The bulk RNA-Seq data mentioned in the article were obtained from TCGA-LIHC ( https://portal.gdc.cancer.gov/repository ) and GSE76427 ( https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE76427 ) 34 for the subsequent construction and validation of the prognostic model. Initially, the samples were randomly divided into training and validation cohorts. We conducted univariate Cox regression analysis on the aforementioned intersection genes to select genes associated with prognosis. Subsequently, we employed Least Absolute Shrinkage and Selection Operator (LASSO) regression analysis with 10-fold cross-validation and multivariate Cox regression analysis to further refine the selection of prognosis-related genes and construct a risk scoring model. This model establishes the relationship between gene expression levels and prognosis using the fitted coefficients and is represented by the following formula: Risk Score = Σi (βi × Expri), summed over all prognosis-associated genes i. In the formula, βi represents the regression coefficient of gene i, and Expri represents its normalized expression level. Subsequently, we performed a validation of the risk scoring model to assess its accuracy. Initially, we conducted an analysis of survival status and survival time differences between high-risk and low-risk groups in the training cohort. The results were presented using scatter plots and Kaplan-Meier survival curves, which were then validated in the independent validation cohort. Next, we assessed the predictive capability of the risk scoring model for prognosis by analyzing the Area under the Curve (AUC) of Receiver Operating Characteristic (ROC) curves for 1-year, 3-year, and 5-year overall survival (OS) in both the training and validation cohorts. Lastly, by integrating clinical data with the risk scoring model, we developed nomograms and evaluated the concordance between the predicted and actual values of 1-year, 3-year, and 5-year OS using calibration curves. Real Time Quantitative Polymerase Chain Reaction (RT-qPCR) Validation We conducted RT-qPCR to assess the expression levels of APCS, ADH4, FTH1, and HSPB1 in HepG2 cells and HepG2-CSCs. The HepG2-CSCs were enriched from the HepG2 cell line using serum-free tumor stem cell culture medium (for the formulation, refer to the Supplementary Tables). RNA extraction was carried out from both HepG2 cells and HepG2-CSCs using the RNA/DNA Isolation Kit (Beyotime, China). Following cell lysis with Trizol, RNA was separated and extracted using a washing solution.
Subsequently, cDNA was synthesized through a reverse transcription system (details provided in Supplementary Tables). The cDNA concentration was adjusted using RNase-free water, and the RT-qPCR analysis was performed utilizing the BeyoFastTM Probe One-Step RT-qPCR Kit (Beyotime, China). Further information regarding the RT-qPCR reaction system and program can be found in Supplementary Tables. The relative mRNA expression levels were computed using the 2^-(∆∆Ct) method.
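As a worked example of the 2^-(ΔΔCt) calculation described above, the short function below computes a relative expression value; the Ct values, the reference gene and the choice of HepG2 as calibrator are hypothetical illustrations of the arithmetic, not data from this study.

```python
# Generic 2^-(delta-delta Ct) relative expression calculation.
# All Ct values below are hypothetical and only illustrate the arithmetic.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_calibrator, ct_ref_calibrator):
    """Fold change of the target gene in the sample vs. the calibrator,
    normalised to a reference (housekeeping) gene."""
    delta_ct_sample = ct_target_sample - ct_ref_sample              # delta-Ct (sample)
    delta_ct_calibrator = ct_target_calibrator - ct_ref_calibrator  # delta-Ct (calibrator)
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator          # delta-delta-Ct
    return 2 ** (-delta_delta_ct)

# e.g. a target gene in HepG2-CSCs (sample) vs. HepG2 (calibrator), with a
# hypothetical housekeeping gene as reference
fold_change = relative_expression(ct_target_sample=22.1, ct_ref_sample=17.8,
                                  ct_target_calibrator=24.0, ct_ref_calibrator=17.9)
print(f"fold change = {fold_change:.2f}")   # > 1 means higher expression in the sample
```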
Results Quality Control and Normalization of scRNA-Seq data The scRNA-Seq data were collected from two independent studies. Upon data integration and application of the aforementioned criteria, tumor tissue samples were selected, resulting in a total of 54,674 cells derived from 16 tumor tissue samples, the detailed clinic parameters of enrolled patients can be found in Supplementary Tables. The characteristics of the scRNA-Seq data before and after filtering are presented in Figure S1 A and S1B, respectively. Utilizing tSNE for dimensionality reduction clustering, evident batch effects were observed among different samples, as illustrated in Figure S1 C. To address this issue, the scRNA-Seq data underwent normalization and sample integration using the Harmony method to correct for batch effects. Figure S1 D represents the sample features following batch effect correction. Dimensionality Reduction, Clustering and Cell Annotation Based on the scree plot results ( Figure S2 A), we determined that 30 principal components (PCs) were appropriate, and performed PCA on the quality-controlled data. The results of the clustering tree ( Figure S2 B) demonstrated that a resolution of 0.8 produced satisfactory clustering outcomes. Subsequently, we employed tSNE for dimensionality reduction clustering, resulting in the formation of 24 clusters ( Figure S2 C). Furthermore, we computed the cell cycle phase for each cell ( Figure S2 D) and integrated it with the normalized sample features ( Figure S2 E), revealing a minimal impact of the cell cycle on the dimensionality reduction clustering. Finally, differential expression gene analysis for the identified 24 clusters was conducted using the Wilcoxon rank-sum test, and the heatmap in Figure S2 F displays the expression levels of the top 5 DEGs. We initially identified six main cell types based on markers (Figure 1 A): B cells (IGKC, IGHG1, CD79A), endothelial cells (PECAM1, PLVAP, VWF), fibroblast cells (COL1A1, RGS5, ACTA2), hepatocytes (ALB, SERPINA1, RBP4), myeloid cells (CD68, S100A9, LYZ), and NK/T cells (CD3E, CD3D, NKG7). The expression levels of markers in cell types are shown in Figure 1 B, while Figure 1 C displays a tSNE density plot of marker expression. The proportion of the main cell types is illustrated in Figure 1 D, with NK/T cells (NK/T) being the most abundant, followed by myeloid cells (Mye) and hepatocytes (Hep), while fibroblast cells (Fib) and endothelial cells (Endo) are less prevalent. We performed differential expression gene analysis, resulting in the identification of DEGs for the six major cell clusters. The heatmap in Figure 1 E presents the expression levels of the top 50 DEGs. Subsequently, we conducted dimensionality reduction clustering for the six main cell types, and Figure S3 A-F shows the tSNE plots of clustering results for each cell types, providing annotations for the subtypes of the main cell types (Figure 1 A) along with their corresponding markers (Figure 2 A-F). The proportions of cell subtypes can be observed in Figure 2 G-L. Among NK/T cells, the proportion of exhausted T cells was the highest (Figure 2 L), accompanied by high expression of immune checkpoint genes such as BATF, TIGIT, and CTLA4 (Figure 2 F), suggesting immune evasion in the TME of HCC tissue. It is worth noting that due to the diverse functions of macrophages, an increasing number of macrophage subtypes have been discovered. 
Hence, in this study, we combined multiple studies to annotate macrophages into six subtypes (Figure 1 A), including interferon-primed macrophages associated with the interferon response, lipid-associated macrophages linked to lipid metabolism, and three macrophage subtypes characterized by the high expression of specific genes: CXCL3+ macrophages, SPP1+ macrophages, and C1Q+ macrophages. Additionally, proliferative macrophages exhibited high proliferative activity. As there are numerous macrophage subtypes, the discrimination between them (Figure 2 E) was not as distinct as the annotation results for B cells (Figure 2 A), endothelial cells (Figure 2 B), and fibroblast cells (Figure 2 C). Regarding the annotation of hepatocyte types, the markers for normal hepatocytes were TF (Transferrin), ALB (Albumin), and APOB (Apolipoprotein B), which are important products of normal liver metabolism. On the other hand, markers for malignant hepatocytes included F2 (Coagulation Factor II), ATP5F1E, and HP (Haptoglobin), all of which are indicative of abnormal metabolism in malignant hepatocytes. Enrichment Analysis and Metabolic Analysis In the preceding sections, we conducted differential gene expression analysis for the major cell clusters. Using GO, KEGG, and Hallmark gene sets, we performed enrichment analysis employing the fGSEA and GSVA algorithms. The results of the enrichment analysis for the main cell types are presented in Figure 3 A-B. Our findings indicate significant enrichment of signaling pathways such as Wnt, TGF-β, and Hedgehog in Endothelial cells and Fibroblast cells, which are known to be involved in tumor development and progression. Moreover, Myeloid cells displayed significant enrichment in pattern recognition receptors (Toll-like receptor, RIG-I-like receptor, Nod-like receptor), cytokine-cytokine receptor interaction, and chemokine signaling pathways, suggesting their participation in both non-specific and specific immune responses. Additionally, Myeloid cells exhibited significant enrichment in the PPAR signaling pathway. Considering the annotation results of cell subtypes, we speculate that the PPAR signaling pathway plays a pivotal role in lipid metabolism within lipid-associated macrophages. Hepatocyte types demonstrated notable enrichment in signaling pathways associated with nucleic acid, lipid, and protein metabolism, programmed cell death, and the cell cycle. In the previous section, we classified the Hepatocyte cluster into four categories: Hepatocytes, Malignant hepatocytes, CSCs, and Cholangiocytes. We further conducted differential expression analysis for Malignant hepatocytes and CSCs, followed by enrichment analysis using DEGs. The fGSEA results (Figure 3 C-D) revealed significant differences between the two groups in protein, lipid, and nucleic acid metabolism pathways, as well as pathways related to the cell cycle. The enrichment results from GSVA (Figure 3 E) provide a more comprehensive overview of the aforementioned findings. Notably, CSCs exhibited a significant association with the Hedgehog and Hypoxia signaling pathways. In light of the preceding analysis, our investigation focused on exploring metabolic disparities among distinct subtypes of the Hepatocyte type. Following the exclusion of the Cholangiocytes subtype, we performed scMetabolism analysis using the KEGG and REACTOME gene sets.
The findings (Figure 3 F-G) revealed that normal hepatocytes manifested heightened activity across the majority of metabolic pathways to fulfill their physiological metabolic requirements. In contrast, Malignant hepatocytes exhibited elevated activity in the oxidative phosphorylation and glycolysis/gluconeogenesis pathways, while demonstrating moderate activity in other metabolic pathways. CSCs exhibited diminished overall metabolic activity but displayed increased engagement in nucleotide metabolism and glycolysis/gluconeogenesis, as well as involvement in inositol phosphate and phosphoinositide metabolism. Cell Communication Initially, we performed an analysis of ligand-receptor pair communication among the main cell types. The results, as depicted in Figure 4 A, revealed significant ligand-receptor interactions among the main cell types. Endothelial cells, Fibroblast cells, and Hepatocytes, serving as the main signal-outgoing cells, displayed highly active regulatory networks to B cells, NK/T cells, and Myeloid cells. Several cell communication pathways were identified among the following pairs: Endo-Fib, Endo-Mye, Fib-Endo, Fib-Hep, Fib-Mye, and Hep-Mye. The prominent ligand-receptor pairs implicated in these pathways encompassed MIF-CD74/CXCR4, MIF-CD74/CD44, MDK-NCL, MDK-SDC2, and MDK-SDC4. Through the application of NMF clustering for dimensionality reduction, we discerned distinct patterns of ligand-receptor pair communication among the main cell types (Figure 4 B-C). The findings suggested that Endothelial cells, Fibroblast cells, and Hepatocytes exhibited a shared pattern as signal-outgoing cells, characterized by primary ligands such as IFN-II, ANNEXIN, MIF, and CXCL. Conversely, B cells, NK/T cells, and Myeloid cells displayed a distinct pattern and were regulated by PARs, VISFATIN, and MK. In terms of the input mode, a distinct clustering pattern was observed solely between B cells and Hepatocytes, whereas the remaining clusters received signals through diverse modes. Significantly, Macrophage Migration Inhibitory Factor (MIF) and Midkine (MDK), as ligands, exerted a substantial influence on the communication networks of multiple cell types, with specific ligand-mediated cell communication networks illustrated in Figure 4 D-E. Subsequently, we isolated Malignant hepatocytes and CSCs for further analysis of cell communication. The results (Figure 4 F) revealed that Malignant hepatocytes and CSCs exhibited high activity in signal emission. Through different secreted factors, such as MIF and MDK, they participated in the regulation of various subtypes of Endothelial cells, Fibroblast cells, NK/T cells, and Myeloid cells, including C1Q+ macrophages (MIF-CD74/CXCR4, MIF-CD74/CD44, MDK-NCL, etc.), CXCL3+ macrophages (MIF-CD74/CXCR4, MIF-CD74/CD44, MDK-NCL, etc.), cancer-associated fibroblasts (CAFs) (F2-F2R, MDK-SDC2, MDK-NCL, etc.), Cytotoxic CD8+ cells (MIF-CD74/CXCR4, MIF-CD74/CD44, MDK-NCL, etc.), and Dendritic cells (MIF-CD74/CXCR4, MIF-CD74/CD44, etc.). CSCs exhibited higher activity than Malignant hepatocytes in signal emission, particularly in the regulation of C1Q+ macrophages and CXCL3+ macrophages. The cell communication network of CSCs is shown in Figure 4 G. Copy Number Variation and Cell Trajectory Analysis Using the inferCNV package in R (v1.16.0), we conducted CNV analysis. The results (Figure 5 A) revealed that CNV in normal hepatocytes was evenly distributed across chromosomes within the hepatocytes type. 
In contrast, malignant cells exhibited an uneven distribution of CNV across chromosomes, with significant variation among different cells. Notably, some malignant cells demonstrated lower CNV, indicating a high level of heterogeneity within the malignant cell population. However, the overall CNV score of malignant cells was significantly higher than that of normal cells (Figure 5 A), thereby supporting the rational annotation of hepatocyte subtypes presented in previous sections. To predict the cell developmental trajectory from CSCs to HCC and identify genes associated with evolution and development, we performed pseudotime analysis. The pseudotime trajectory plot generated using Monocle3 (Figure 5 B) revealed that CSCs served as the starting point of development for malignant hepatocytes. Along the pseudotime trajectory, malignant hepatocytes diversified into 12 distinct developmental subtypes, aligning with the results obtained from inferCNV and emphasizing the pronounced heterogeneity of malignant hepatocytes. Additionally, we conducted Monocle2 pseudotime analysis, and the findings are depicted in Figure 5 C-D. Through the utilization of a spatial differential gene algorithm, we calculated the Moran's I spatial autocorrelation for genes and ranked them based on Moran's I. Subsequently, we performed fGSEA enrichment analysis for genes associated with development (Moran's I > 0.5). The results (Figure 5 E) suggested the involvement of signaling pathways such as MAPK, PPAR, and leukocyte transendothelial migration in the regulation of CSCs development, particularly through pathways previously reported to be associated with p38 MAPK/HSPB1. Furthermore, we presented the expression profiles of selected genes highly correlated with development (Moran's I > 0.8) along the pseudotime trajectory (Figure 5 F). Among these genes, ADH4, ATP5F1E, GAGE12H, and IGLV2-14 exhibited a tendency for high expression during the mid-stage of development. RACK1 and AGXT exclusively displayed high expression in the late stage of development. Metabolism-related genes such as AHSG, ALB, and APOE gradually increased in expression during the mid-late stage of development, indicating their stronger association with abnormal tumor metabolism. VCX3A and HSPB1 showed elevated expression only in the early stage of development, implying their close correlation with CSCs development. The UMAP dimensionality reduction density plot illustrating the mentioned genes is presented in Figure 5 G. Weighted Gene Co-expression Network Analysis (WGCNA) We employed WGCNA to identify gene expression modules. The optimal power value of 5 was determined based on the scale-free topology fit index (>0.85) and mean connectivity (Figure 6 A). The clustering dendrogram and TOM heatmap of WGCNA are presented in Figure 6 B-C. Additionally, we generated heatmaps illustrating the correlation between modules and phenotypes (Figure 6 D) and between module eigengenes (MEs) and the expression levels of all genes (Figure 6 E). The analysis revealed that the MEturquoise module (R = 0.54, p -value = 2e-15) and the MEblue module (R = -0.46, p -value = 3e-11) exhibited the strongest correlation with CSCs. Notably, these two modules displayed a negative correlation with each other. Further examination of the heatmap depicting characteristic gene expression levels (Figure 6 F-G) uncovered that the MEturquoise module had a high expression in CSCs, whereas the MEblue module exhibited low expression. 
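For readers unfamiliar with the Moran's I ranking used above to select trajectory-related genes, the sketch below computes a global Moran's I for a single gene over a k-nearest-neighbour graph built on a low-dimensional embedding. This is only an illustration of the statistic (Python with NumPy/scikit-learn); Monocle3's graph_test works on its own principal graph and includes a significance test, and the choice of k and of binary weights here are assumptions.

```python
# Minimal sketch of a global Moran's I statistic on a k-nearest-neighbour graph,
# illustrating the spatial autocorrelation measure used to rank trajectory genes.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def morans_i(expr, embedding, k=15):
    """expr: (n_cells,) expression of one gene; embedding: (n_cells, d), e.g. UMAP/PCA."""
    n = expr.shape[0]
    w = kneighbors_graph(embedding, n_neighbors=k, mode="connectivity")  # sparse n x n
    w = 0.5 * (w + w.T)                      # symmetrise the neighbour graph
    z = expr - expr.mean()                   # centred expression
    s0 = w.sum()                             # sum of all weights
    num = z @ (w @ z)                        # sum_ij w_ij * z_i * z_j
    den = (z ** 2).sum()
    return (n / s0) * (num / den)

# Purely illustrative example with random data:
rng = np.random.default_rng(0)
emb = rng.normal(size=(500, 2))
gene = emb[:, 0] + 0.1 * rng.normal(size=500)   # spatially structured -> I close to 1
print(round(morans_i(gene, emb), 3))
```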
Construction and Validation of Clinical Prediction Model Using LASSO regression analysis (Figure 7 A) and conducting 10-fold cross-validation (Figure 7 B), we identified four genes associated with prognosis and established a risk prediction model as follows: Risk Score = Expression APCS × (-0.1036) + Expression FTH1 × (-0.2915) + Expression HSPB1 × (0.1010) + Expression ADH4 × (-0.0755). Subsequently, we incorporated the clinical features of the samples, including age, gender, and tumor stage. Furthermore, calibrated nomograms were obtained (Figure 7 C) through calibration curve validation. The nomograms exhibited predictive accuracies of 93.1%, 85.1%, and 76.3% for 1-year, 3-year, and 5-year overall survival (OS) predictions, respectively. The calibration curves demonstrated a strong agreement between the predicted and actual OS. Risk scores were calculated for each sample, and all samples, as well as the training and testing cohorts, were divided into Risk-High and Risk-Low groups based on the median risk score. Kaplan-Meier survival curves for OS in the different cohorts are presented in Figure 7 D-F, indicating significantly lower OS in the Risk-High group compared to the Risk-Low group. Consistent results were observed between the training and testing cohorts. The ROC curves (Figure 7 G-I) indicated that the model exhibited high predictive value. Heatmaps illustrating the expression levels of the four prognostic-related genes in all samples, the training cohort, and the testing cohort can be found in Figure S4 A-C. Additionally, the distribution of risk scores ( Figure S4 D-F) and survival status ( Figure S4 G-I) in different cohorts were presented. The RT-qPCR results (Figure 7 J-M) reveal a significant increase in the expression levels of APCS ( P = 0.0009), ADH4 ( P = 0.0011), and FTH1 ( P < 0.0001) in HepG2 cells when compared to CSCs. Conversely, the expression level of HSPB1 ( P = 0.0284) is decreased. These results align with the prognostic model.
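To illustrate how the reported risk score is applied, the sketch below evaluates the published formula for a hypothetical patient and stratifies a simulated cohort at the median score into Risk-High and Risk-Low groups; all expression values are invented, and in practice they must be normalised exactly as in the authors' TCGA-LIHC/GSE76427 processing.

```python
# Worked example of the published risk score; expression values are hypothetical
# and must be normalised in the same way as the original bulk RNA-Seq data.
import numpy as np

COEFFICIENTS = {          # coefficients as reported in the model
    "APCS": -0.1036,
    "FTH1": -0.2915,
    "HSPB1": 0.1010,
    "ADH4": -0.0755,
}

def risk_score(expression: dict) -> float:
    """Risk Score = sum_i (beta_i * normalised expression of gene i)."""
    return sum(COEFFICIENTS[g] * expression[g] for g in COEFFICIENTS)

# One hypothetical patient
patient = {"APCS": 2.1, "FTH1": 5.4, "HSPB1": 3.8, "ADH4": 0.9}
print(round(risk_score(patient), 4))

# Stratify a hypothetical cohort at the median score (Risk-High vs. Risk-Low)
rng = np.random.default_rng(1)
cohort = [{g: rng.uniform(0, 8) for g in COEFFICIENTS} for _ in range(100)]
scores = np.array([risk_score(p) for p in cohort])
groups = np.where(scores > np.median(scores), "Risk-High", "Risk-Low")
print(dict(zip(*np.unique(groups, return_counts=True))))
```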
Discussion Currently, the standard treatment for HCC has shifted towards a combination of immune therapy and targeted therapy. The IMbrave150 study has shown that the combination of atezolizumab and bevacizumab is now a first-line treatment option for unresectable, locally advanced, or metastatic hepatocellular carcinoma (uHCC) 35 . According to the latest ASCO 2023 research report 36 , combining a TIGIT inhibitor with atezolizumab and bevacizumab nearly doubled the median progression-free survival (11.1 months vs. 4.2 months), providing a promising new first-line treatment option for uHCC. Despite advancements in systemic treatment strategies for uHCC, the survival outcomes for patients with uHCC remain unfavorable. During systemic treatment, HCC is prone to developing drug resistance and metastasis, with CSCs playing a prominent role. CSCs contribute to the high heterogeneity of tumors, drug resistance, and recurrence due to their robust self-renewal capacity. Moreover, CSCs regulate various cells within the tumor microenvironment (TME), thereby altering cellular metabolism patterns and intercellular communication. These changes lead to immune evasion, T-cell exhaustion, and the establishment of a hypoxic TME. Our research findings highlight that CSCs not only modulate other cells, particularly macrophage subtypes, through specific ligand-receptor interactions to reshape the TME, but they also possess distinctive metabolic profiles that differ significantly from both tumor and normal cells. This suggests that metabolism plays a pivotal role in CSCs-mediated regulation of the TME. Furthermore, CSCs can promote the progression of HCC by activating specific signaling pathways. The influence of CSCs on the TME encompasses various factors, including physicochemical and metabolic characteristics, stromal cells, and immune cells 37 . Our research findings demonstrate that CSCs display heightened activity in hypoxic pathways, as hypoxia plays a critical role in maintaining the stem-like properties of tumor cells. Hypoxia contributes to the enhancement of stemness in HCC through HIF1α- and HIF2α-dependent mechanisms, while also supporting the persistence of CD24+ CSCs 38 . Additionally, hypoxia activates HIF1α via the IL-6/STAT3 signaling pathway, resulting in increased CD133 expression and the maintenance of tumor cell stemness 39 . Metabolic analysis further reveals that both CSCs and non-CSC HCC cells exhibit elevated activity in glycolysis/gluconeogenesis pathways. However, CSCs exhibit overall low glucose metabolism with noticeable glucose deprivation. Glucose deprivation is a significant physicochemical factor in maintaining the stemness of tumor cells. Studies suggest that glucose deprivation induces abnormal fucosylation of membrane glycoproteins mediated by FUT1, thereby sustaining tumor cell stemness through the AKT/mTOR/4E-BP1 signaling pathway 40 . Notably, CSCs and non-CSCs HCC demonstrate notable disparities in fatty acid metabolism, the PPAR signaling pathway, and the peroxisome proliferator-activated receptor pathway, indicating potential involvement of CSCs in lipid metabolism within the tumor tissue. Existing evidence suggests that the overexpression of OCTN2 enhances PGC-1α-mediated fatty acid oxidation and oxidative phosphorylation, thereby promoting the stemness of HCC 41 . Furthermore, abnormal lipid metabolism can also contribute to the maintenance of HCC stemness through inflammation and the activation of oncogenes 42 , 43 . 
Cell communication results indicate that CSCs primarily regulate the TME by secreting MIF (MIF-CD74/CXCR4, MIF-CD74/CD44) 44 , 45 or MDK (MDK-SDC2, MDK-NCL), which act on stromal cells and immune cells, particularly myeloid cells and CAFs. CSCs can enhance the function of myeloid-derived suppressor cells (MDSCs) through the MIF/CXCR2 pathway, thereby fostering an immunosuppressive microenvironment 46 . Additionally, the MIF/CXCR4 pathway is considered crucial for the recruitment of mesenchymal stem cells 47 . Single-cell transcriptomic studies have demonstrated that tumor cells can also affect CAFs 48 and endothelial cells 49 through the MDK-NCL pathway, contributing to an immunosuppressive microenvironment. Signaling pathways associated with stemness play a significant role in systemic treatment resistance 50 . Our findings suggest the presence of aberrantly activated Hedgehog and MAPK signaling pathways in CSCs. Studies have confirmed the involvement of the Hedgehog signaling pathway 51 , 52 and the MAPK signaling pathway 53 , 54 in regulating CSC differentiation and drug resistance. Importantly, by combining Monocle3 pseudotime analysis and prognostic modeling, we discovered that HSPB1 acts as a regulatory factor in CSC development and that its high expression indicates a poor prognosis in HCC. Given that HSPB1 is a downstream target of the p38 MAPK signaling pathway and an upstream activator of the NF-κB signaling pathway 55 , the p38 MAPK/HSPB1/NF-κB axis is likely a key pathway involved in CSC development; preliminary studies have provided initial support for this hypothesis 56 - 58 , and further validation is warranted. We focused on analyzing the interactions between CSCs and the HCC microenvironment. Although we did not conduct in-depth research on stromal cells and immune cells within the HCC microenvironment, we observed intriguing phenomena that warrant further investigation. These include the interactions between macrophages and CAFs, alterations in gene expression profiles and metabolism in exhausted T cells, and the role of B cells in the TME. Recent studies have substantiated the research value of these phenomena 59 - 63 . Nevertheless, it is important to acknowledge the limitations of this study. Building upon these findings, our future research will employ single-cell transcriptomics and spatial transcriptomics to delve deeper into these areas.
Conclusions In this study, we utilized scRNA-Seq data from HCC tissues to elucidate the mechanisms underlying the interaction between CSCs and the TME through comprehensive omics analyses. These analyses encompassed cell communication, cell trajectories, and the activation of aberrant signaling pathways, providing a holistic understanding of the pivotal role played by CSCs in the TME. Moreover, by integrating bulk RNA-Seq data, we developed a clinical prognostic model. Additionally, our research shed light on intriguing phenomena involving stromal cells and immune cells within the TME, which warrant further in-depth investigation.
Competing Interests: The authors have declared that no competing interest exists. Background: The challenge of systemic treatment for hepatocellular carcinoma (HCC) stems from the development of drug resistance, primarily driven by the interplay between cancer stem cells (CSCs) and the tumor microenvironment (TME). However, there is a notable dearth of comprehensive research investigating the crosstalk between CSCs and stromal cells or immune cells within the TME of HCC. Methods: We procured single-cell RNA sequencing (scRNA-Seq) data from 16 patients diagnosed with HCC. Employing meticulous data quality control and cell annotation procedures, we delineated distinct CSCs subtypes and performed multi-omics analyses encompassing metabolic activity, cell communication, and cell trajectory. These analyses shed light on the potential molecular mechanisms governing the interaction between CSCs and the TME, while also identifying CSCs' developmental genes. By combining these developmental genes, we employed machine learning algorithms and RT-qPCR to construct and validate a prognostic risk model for HCC. Results: We successfully identified CSCs subtypes residing within malignant cells. Through meticulous enrichment analysis and assessment of metabolic activity, we discovered anomalous metabolic patterns within the CSCs microenvironment, including hypoxia and glucose deprivation. Moreover, CSCs exhibited aberrant activity in signaling pathways associated with lipid metabolism. Furthermore, our investigations into cell communication unveiled that CSCs possess the capacity to modulate stromal cells and immune cells through the secretion of MIF or MDK, consequently exerting regulatory control over the TME. Finally, through cell trajectory analysis, we found developmental genes of CSCs. Leveraging these genes, we successfully developed and validated a prognostic risk model (APCS, ADH4, FTH1, and HSPB1) with machine learning and RT-qPCR. Conclusions: By means of single-cell multi-omics analysis, this study offers valuable insights into the potential molecular mechanisms governing the interaction between CSCs and the TME, elucidating the pivotal role CSCs play within the TME. Additionally, we have successfully established a comprehensive clinical prognostic model through bulk RNA-Seq data.
Supplementary Material
Thanks to JL for the technical support provided in this study. Funding This study was supported by the Shenzhen Science and Technology Innovation Commission General Program [grant number, JCYJ20210324111207020]. Ethics statement The HepG2 cell line originates from commercial sources (Beyotime, China). The scRNA-Seq and Bulk RNA-Seq data utilized in this study is exclusively obtained from publicly available sources, eliminating the need for ethical scrutiny. Data availability statement The single-cell RNA sequencing (scRNA-Seq) data generated and analysed during the current study are available in the Mendeley Data ( https://data.mendeley.com/datasets/skrx2fz79n/1 ) and the Gene Expression Omnibus (GEO) database ( https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE149614 ). The bulk RNA-Seq data generated and analysed during the current study are obtained from TCGA-LIHC ( https://portal.gdc.cancer.gov/repository ) and GEO database ( https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE76427 ). Author contributions Conceptualization, LP; resources, SL; methodology, SL; software, SL; formal analysis, DL and YY; validation, MY and RZ; data curation, MY and RZ; supervision, LP; writing-original draft, SL and DL; writing-review and editing, LP.
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):1093-1109
oa_package/d9/f5/PMC10788724.tar.gz
PMC10788725
0
Introduction Zn is a crucial trace element involved in various biochemical processes, and disruptions in intracellular Zn homeostasis have been associated with several pathological conditions. Previous studies have indicated the significance of Zn in antiviral activity and its role in respiratory viral infections 1 , as well as its involvement in cancer development and progression 2 . Dysregulated Zn levels are frequently observed in tumor tissues 3 , 4 . It has been suggested that Zn may be involved in cancer progression either by directly affecting cancer cells' proliferation and viability 5 or by regulating the tumor microenvironment 6 . Two major groups of proteins, ZNT (SLC30A) and ZIP (SLC39A), are involved in regulating cellular Zn homeostasis. The SLC30A family comprises ten members (SLC30A1-10) responsible for exporting Zn 2+ out of the cytoplasm, either to the extracellular environment or intracellular compartments 7 . In contrast, the ZIP family comprises 14 members (SLC39A1-14) that facilitate the import of Zn 2+ into the cytoplasm, leading to elevated cytosolic zinc concentrations 8 . Studies have consistently demonstrated that the expression of SLC30A/SLC39A family genes is closely linked to cancer progression. For example, differential expression of SLC30A1 and SLC30A6 has shown significant prognostic value in pancreatic cancer 9 . SLC39A4 shows significant upregulation in pancreatic cancer when compared to normal pancreatic tissues and has been proposed as a novel diagnostic marker for detecting the disease 10 . The knockdown of SLC39A4 in pancreatic cancer cells leads to a significant inhibition of cell proliferation, migration, and invasion, indicating its potential as a therapeutic target 11 . In lung cancer cells, silencing SLC39A4 can inhibit cell migration and enhance sensitivity to cisplatin 8 . The specific roles of SLC39A14 and SLC39A7 have also been well-documented in prostate cancer 12 and colorectal cancer cells 13 , respectively, emphasizing the importance of zinc transporters in detecting specific cancers, predicting patient prognosis, and developing new anticancer therapies. Given the critical roles of zinc transporters in cancer progression, this study aimed to systematically investigate the cancer-specific expression and prognostic value of these transporters through a pan-cancer analysis. The main objective was to explore the expression patterns of Zn homeostasis-related genes and their potential for predicting prognosis and developing therapeutic strategies for specific cancers. The analysis revealed that the expression of SLC39A1 , SLC39A4 , and SLC39A8 is tightly associated with the prognosis of LIHC, CESC/PAAD, and KIRP, respectively. Additionally, the mutation analysis indicated that mutations in the SLC39A4 gene have a significant and wide-ranging effect on DF, OS, and PFS in cancer patients, particularly in those with PAAD. Notably, SLC39A4 was also identified as a potential immunomodulator in PAAD due to its strong correlation with immune cell infiltration in this cancer type.
Materials and Methods Expression and prognostic value analysis of SLC30A/ SLC39A family genes in tumor tissues and cancer cell lines The TissueNexus database, which integrates RNA-seq data from 52,087 samples of 49 human tissues/cell lines, was employed to analyze the expression of SLC30A and SLC39A family genes across 49 tissues/cell lines 14 . This allowed for a comprehensive understanding of the expression patterns of SLC30A and SLC39A genes. To investigate the differential expression of these genes and their prognostic value in specific cancers, the GEPIA online tool was employed. GEPIA provides a user-friendly interface for analyzing gene expression data and examining its correlation with patient survival. Additionally, the correlation between gene expression and patient survival was also verified through the UCSC Xena database, which contains comprehensive datasets for gene expression and clinical information. To further analyze the RNA-level expression of SLC30A and SLC39A family genes, the UALCAN database was utilized. To validate the survival curves, the standardized pan-cancer expression profiles and the corresponding clinical data were downloaded from the UCSC Xena database. The tidyverse package was used to integrate the data, the survival and survminer packages were used to construct survival curves, and the ggsurvplot function in survminer was used for visualization in R. Biological functions investigation of SLC39A1 , SLC39A4 , and SLC39A8 by Gene Set Enrichment Analysis The LinkedOmics database was utilized to investigate the co-expressed genes associated with SLC39A1 , SLC39A4 , and SLC39A8 . The LinkedOmics database integrates clinical data from 32 cancers and includes information from 11,158 patients 15 . To gain further insights into the biological functions of these co-expressed genes, GO and KEGG pathway enrichment analyses were performed through the DAVID database. Genomic alterations of SLC39A1 , SLC39A4 , and SLC39A8 in cancers The mutation status of SLC39A1 , SLC39A4 , and SLC39A8 in various cancers and the impact of these mutations on clinical outcomes were analyzed using the cBioPortal database. cBioPortal is a widely used web-based platform that allows for the exploration and analysis of cancer genomics data and provides valuable information on the genetic landscape of these zinc transporter genes in cancer and their potential impact on patient prognosis. Association analysis of SLC39A1 , SLC39A4 , and SLC39A8 expression with immune cell infiltration in cancers The TIMER database, which allows for the assessment of immune cell infiltration and its correlation with gene expression profiles in different cancer types, was utilized to explore the correlations between the expression of SLC39A1 , SLC39A4 , and SLC39A8 and immune cell infiltration in various cancers. qRT-PCR analysis The differential expression of SLC39A1 and SLC39A4 in related normal cells and tumor cells was further evaluated by qRT-PCR. Total RNA was extracted with TRIzol, amplification was monitored with SYBR dye, and GAPDH was employed as the internal control. In addition, the expression of SLC39A4 was examined at the RNA level in para-cancerous and tumor tissues of pancreatic cancer patients. Human tissue samples were provided by the Zhejiang Provincial People's Hospital under a protocol approved by the local medical ethics committee (2023-068). 
All patients were required to provide written informed consent. The primer sequences were as follows: SLC39A1: Forward: 5'-GCTGTTGCAGAGCCACCTTA-3', Reverse: 5'-CATGCCCTCTAGCACAGACTG-3'; SLC39A4: Forward: 5'-TGGTCTCTACGTGGCACTC-3', Reverse: 5'-GGGTCCCGTACTTTCAACATC-3'; GAPDH: Forward: 5'-AACGGATTTGGTCGTATTGG-3', Reverse: 5'-TTGATTTTGGAGGGATCTCG-3'. Statistical analysis Statistical differences between two groups were tested using the unpaired two-tailed t-test in Microsoft Excel. Statistical differences among more than two groups were tested using one-way analysis of variance (ANOVA) with Tukey's post-hoc HSD test. Differences were considered significant at a p-value < 0.05. Figures were prepared with GraphPad Prism 6, and data are given as mean ± standard deviation (SD).
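To make the quantification and the two-group test described above concrete, here is a hedged R sketch. The 2^-ΔΔCt step is an assumption (the usual approach for SYBR-based qRT-PCR with a GAPDH control; the text does not state the exact formula), the Ct values are invented for illustration, and the original comparisons were run in Microsoft Excel and GraphPad Prism rather than R.

# Hypothetical Ct values (three replicates per group); all numbers are placeholders
ct_gapdh_tumor  <- c(17.1, 17.4, 16.9)   # GAPDH Ct in tumor cells
ct_target_tumor <- c(22.0, 22.3, 21.8)   # SLC39A4 Ct in tumor cells
ct_gapdh_norm   <- c(17.2, 17.0, 17.3)   # GAPDH Ct in normal cells
ct_target_norm  <- c(25.1, 24.8, 25.4)   # SLC39A4 Ct in normal cells

# Normalize each sample to the GAPDH internal control
dct_tumor <- ct_target_tumor - ct_gapdh_tumor
dct_norm  <- ct_target_norm  - ct_gapdh_norm

# Relative expression of the tumor samples versus the normal-cell mean (assumed 2^-ΔΔCt)
fold_change <- 2^(-(dct_tumor - mean(dct_norm)))
fold_change

# Unpaired two-tailed t-test on the ΔCt values (Welch's test by default in R)
t.test(dct_tumor, dct_norm)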
Results Expression of SLC30A and SLC39A family genes in human tissues and cell lines To investigate the tissue specificity of SLC30A and SLC39A family gene expression, we utilized the TissueNexus database. The analysis revealed that several genes within the SLC30A and SLC39A families, namely SLC30A1 , SLC30A5 , SLC30A6 , SLC30A7 , SLC30A9 , SLC39A1 , SLC39A3 , SLC39A7 , SLC39A9 , SLC39A10 , SLC39A11 , SLC39A13 , and SLC39A14 , exhibit widespread expression in various tissues and cell lines such as the brain, intestine, liver, bladder, lung, pancreas, kidney, prostate, and breast (Figure 1 ). Conversely, SLC30A3 , SLC30A4 , SLC30A10 , SLC39A2 , and SLC39A5 show limited or relatively low expression levels across most analyzed tissues and cell lines. Notably, our findings indicate that SLC30A4 displays specific high expression in the prostate, while SLC39A2 is exclusively expressed in the intestine (Figure 1 ), suggesting potential tissue-specific expression patterns for SLC30A4 and SLC39A2 . However, it should be mentioned that SLC30A2 , SLC30A8 , and SLC39A12 were not included in this analysis due to unavailable information in the TissueNexus database. Furthermore, we examined the protein-level expression of SLC30A and SLC39A family genes in 33 types of cancer as well as their corresponding adjacent tissues using GEPIA online tool. Our analysis demonstrated significant overexpression of SLC39A1 (Supplementary Figure S1 ) in brain lower grade glioma (LGG), lymphoid neoplasm diffuse large B-cell lymphoma (DLBC), glioblastoma multiforme (GBM), testicular germ cell tumors (TGCT), thymoma (THYM), LIHC, and PAAD. Conversely, SLC39A1 exhibited significant downregulation in KICH. The upregulation of SLC39A4 (Supplementary Figure S1 ) was observed in bladder urothelial carcinoma (BLCA), breast invasive carcinoma (BRCA), CESC, colon adenocarcinoma (COAD), DLBC, esophageal carcinoma (ESCA), head and neck squamous cell carcinoma (HNSC), lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LUSC), ovarian serous cystadenocarcinoma (OV), PAAD, rectum adenocarcinoma (READ), stomach adenocarcinoma (STAD), THYM, uterine corpus endometrial carcinoma (UCEC), and uterine carcinosarcoma (UCS). However, downregulation of SLC39A4 was observed in kidney renal clear cell carcinoma (KIRC), acute myeloid leukemia (LAML), and LGG. Additionally, SLC39A8 (Supplementary Figure S1 ) exhibited significant overexpression in adrenocortical carcinoma (ACC), COAD, DLBC, ESCA, glioblastoma multiforme (GBM), KIRP, READ, STAD, TGCT, THYM, and UCEC, while its downregulation was observed in LUAD and LUSC. The differential expression profiles of other SLC30A and SLC39A family members, excluding SLC39A1 , SLC39A4 , and SLC39A8 , across the 33 types of cancer were also analyzed, and the results are provided in Supplementary Figure S2 -S3. Prognostic value assessment of SLC30A and SLC39A family genes in pan-cancer Subsequently, we evaluated the prognostic value of differentially expressed genes from the SLC30A and SLC39A families in the respective cancers. Initially, we analyzed the association between overall survival (OS) and the expression of SLC30A and SLC39A family genes across the 33 types of cancers. Notably, we observed a significant association between OS and the expression of SLC39A1 in LIHC, SLC39A4 in CESC and PAAD, and SLC39A8 in KIRP. As depicted in Figure 2 A, we performed a prognosis analysis of the SLC30A and SLC39A protein families in clinical patients using the GEPIA online tool. 
The results indicated that high expression of SLC39A1 in LIHC (p=0.0042), high expression of SLC39A4 in CESC (p=0.035) and PAAD (p=0.021), and low expression of SLC39A8 in KIRP (p=0.0025) were associated with lower overall survival rates. These findings suggest the potential of SLC39A1 , SLC39A4 , and SLC39A8 as prognostic markers for LIHC, CESC/PAAD, and KIRP, respectively. To further validate the correlation between the expression levels of SLC39A1 , SLC39A4 , and SLC39A8 and overall survival in LIHC, CESC/PAAD, and KIRP patients, we conducted Kaplan-Meier analysis using patient cohorts obtained from the TCGA database. Patient information is displayed in Supplementary Table S1 . The results (Figure 2 B) were consistent with the previous analyses, demonstrating that high levels of SLC39A1 and SLC39A4 predicted poor overall survival in LIHC (p = 0.0016) and CESC (p = 0.064)/PAAD (p = 0.021), respectively, while high levels of SLC39A8 predicted favorable overall survival in KIRP (p = 0.001). The differential expression of SLC39A1 , SLC39A4 , and SLC39A8 in normal and tumor cells/tissues was validated using different data sources and laboratory experiments Based on the above findings, we further validated the expression of SLC39A1 , SLC39A4 , and SLC39A8 in LIHC, CESC/PAAD, and KIRP using different data resources. The results (Figure 2 C) demonstrated that SLC39A1 was highly expressed in LIHC patients, SLC39A4 was highly expressed in CESC and PAAD patients, and SLC39A8 was highly expressed in KIRP patients. Additionally, the mRNA expression of these genes was analyzed using the UALCAN database, which corroborated the protein-level expression analyses (Figure 2 D). Moreover, we conducted quantitative reverse transcription-polymerase chain reaction (qRT-PCR) analyses to corroborate the differential expression of SLC39A1 in liver cancer cells (primary human hepatocytes L-O2, HuH7, and LM3) and SLC39A4 in pancreatic cancer cells (the human pancreatic duct epithelial cell line H6c7, PANC-1, and SW1990). Additionally, the expression levels of SLC39A4 were verified in paracancerous and tumor tissues derived from pancreatic cancer patients (Table 1 ). As illustrated in Figure 2 E, SLC39A1 exhibited significantly elevated expression in liver cancer cells compared to normal liver cells, and SLC39A4 was markedly upregulated in both pancreatic cancer cells and the corresponding tissues. These results reinforce our earlier observations. Genetic alteration analysis of SLC39A1 , SLC39A4 , and SLC39A8 genes in specific cancers In light of the aberrant expression and significant roles of SLC39A1 , SLC39A4 , and SLC39A8 in specific cancers, as well as the link between genetic alterations and cancer development, we further investigated the genetic alterations of these genes using the cBioPortal database. The analysis revealed that the mutation frequencies of SLC39A1 , SLC39A4 , and SLC39A8 in pan-cancer patients were 4%, 6%, and 0.9%, respectively (Figure 2 A). The primary types of genetic alterations observed for SLC39A1 , SLC39A4 , and SLC39A8 were amplification, missense mutation, and deep deletion. Notably, the genetic alteration rate of SLC39A1 in LIHC reached 11%, while the rates of SLC39A4 in CESC and PAAD were 3% and 10%, respectively, primarily characterized by amplification. 
Conversely, SLC39A8 exhibited the lowest genetic alteration rate in KIRP, at only 1.1%, predominantly with missense mutations (Figure 2 B). We further analyzed the correlation between specific genetic alterations of these genes and patients' prognosis in pan-cancer. According to the Kaplan-Meier plotter analysis, among the 33 types of cancers examined, genetic mutation of SLC39A1 was not found to have a significant association with prognosis of cancer patients (Supplementary Figure S4 A). However, genetic mutation of SLC39A4 had a significant impact on DFS and PFS in cancer patients (p-values: 2.718e-6 and 7.562e-4, respectively) (Supplementary Figure S4 B). Additionally, genetic alteration of SLC39A8 was closely correlated with OS in cancer patients (p-value: 0.0298) (Supplementary Figure S4 C). Specifically, in LIHC, genetic mutation of SLC39A1 showed no significant correlation with patients' prognosis (Figure 2 C). However, mutations in SLC39A4 had a significant effect on the OS, DFS, and PFS of PAAD patients, although no impact was observed in patients with CESC (Figure 2 D-E). Due to the extremely small sample size, the correlation between genetic alteration of SLC39A8 in KIRP and patients' prognosis could not be analyzed. These findings collectively indicate that genetic alteration of SLC39A4 in PAAD may have a certain degree of impact on the prognosis of patients with pancreatic cancer. Biological function of SLC39A1 , SLC39A4 , and SLC39A8 in related cancers To gain further insights into the biological functions of SLC39A1 , SLC39A4 , and SLC39A8 in cancers, we utilized the LinkedOmics database to analyze their co-expression profiles in LIHC, CESC/PAAD, and KIRP, respectively. As a result, we obtained 19,921 genes related to SLC39A1 in LIHC, 19,903 genes related to SLC39A4 in CESC, 19,773 genes related to SLC39A4 in PAAD, and 29,923 genes related to SLC39A8 in KIRP (Figure 3 A). The heat maps in Supplementary Figure S5 display the top 50 genes that positively correlate with SLC39A1 , SLC39A4 , and SLC39A8 . We performed KEGG pathway enrichment analysis using the DAVID database with the top 20 positively correlated genes (P value < 0.001). In LIHC, the co-expression genes of SLC39A1 were predominantly enriched in the Relaxin, Rap1, and FOXO signaling pathways (Figure 3 B). In CESC, the genes associated with SLC39A4 were primarily enriched in the PD-L1, Thyroid hormone, and T cell receptor signaling pathways (Figure 3 C). Similarly, in PAAD, the SLC39A4 -related genes were enriched in the same pathways (Figure 3 D). Furthermore, the genes related to SLC39A8 in KIRP were mainly involved in metabolic processes and the MAPK signaling pathway (Figure 3 E). These findings indicate that SLC39A1 , SLC39A4 , and SLC39A8 may play crucial roles in specific cancers by participating in various cellular processes and pathways. However, ZIP proteins also play an important role in other cancers. ZIP1 is associated with chemotherapy resistance in lung cancer 16 . It has antiproliferative effects on Prostate cancer 17 as well as effects on invasion and migration. In our study, ZIP1 serves as a potential prognostic marker in hepatocellular carcinoma. ZIP4 acts as an important regulator of the Snail-N-cadherin signaling axis in promoting non-small cell lung cancer progression 18 . It has been shown that the expression level of ZIP4 is negatively correlated with the survival rate of hepatocellular carcinoma 19 . 
In colon cancer, high expression of ZIP4 is associated with poorer prognosis in stage I-III patients 20 . In the study herein, high expression of ZIP4 was highly correlated with low OS in pancreatic cancer. It has been claimed that ZIP8 is an important regulator of neuroblastoma cell proliferation and migration 21 . However, in our study, ZIP8 was also found to be a potential prognostic marker for papillary cell carcinoma of the kidney. Correlation analyses of SLC39A1, SLC39A4, and SLC39A8 expression with immune-related biomarker and immune cell infiltration in cancers Zn also plays a critical role in immunity as a catalytic and structural cofactor. Current studies have shown that zinc ions can regulate the function of T cells, monocytes, and macrophages, and can modulate immune responses through signaling pathways such as NF-κB 22 , inspiring us to investigate the associations of Zn transporters expression and immune infiltration in related cancer. To investigate the associations between the expression of SLC39A1 , SLC39A4 , and SLC39A8 and immune infiltration in related cancers, we utilized the TIMER database. We conducted an analysis to examine the relationship between the expression of these genes and the infiltration levels of immune cells, including B cells, CD4+/CD8+ T cells, myeloid dendritic cells, macrophages, and neutrophils. In LIHC (Figure 4 A), SLC39A1 expression showed a positive correlation with the infiltration levels of B cells (rho = 0.265, p = 5.75e-07), myeloid dendritic cells (rho = 0.415, p = 8.44e-16), and neutrophils (rho = 0.391, p = 4.44e-14). However, it was negatively correlated with the infiltration level of macrophages (rho = -0.442, p = 6.01e-18). In CESC and PAAD (Figure 5 A and 6 A), SLC39A4 expression was negatively correlated with the infiltration of B cells, CD4 + /CD8 + T cells, myeloid dendritic cells, macrophages, and neutrophils. Conversely, in KIRP (Figure 7 A), the expression of SLC39A8 exhibited a positive correlation with the infiltration levels of CD4 + /CD8 + T cells, myeloid dendritic cells, and neutrophils, however, it showed a negative correlation with the infiltration levels of B cells and macrophages. We further analyzed the correlation between the expression of SLC39A1 , SLC39A4 , and SLC39A8 and immune-related factors, including immune-stimulators, immune-inhibitors, chemokines, and chemokine receptors. In LIHC (Figure 4 B-E), SLC39A1 expression exhibited positive correlations with several immune factors, such as TNFSF9, TNFSF4, TNFRSF4, CD276, VTCN1, IL10RB, LGALS9, TGFBR1, CXCL1, CXCL8, CCL20, CCL26, TPA1, TPA2, TAPBP, and HLA-A. In CESC (Figure 5 B-E), SLC39A4 expression showed positive correlations with immune factors like NT5E, TNFRSF9, PVR, CD276, PVRL2, VTCN1, TGFB1, IL10RB, CXCL2, CXCL3, CCL15, CCL28, TAPBP, HLA-DOA, HLA-F, and HLA-DMA. Similarly, in PAAD (Figure 6 B-E), SLC39A4 expression displayed positive correlations with immune factors such as TNFRSF18, TNFRSF14, TNFRSF25, TNFSF9, IL10RB, TGFB1, PVRL2, LGALS9, TNFRSF25, CXCL3, CXCL16, CXCL17, TAPBP, TAP2, HLA-C, and HLA-A. Notably, SLC39A4 expression in PAAD showed a strongly negative association with most of the immune-inhibitors, indicating its potential role in regulating immune responses in PAAD. In KIRP (Figure 7 B-E), SLC39A8 expression positively correlated with immune factors such as CD40, CD70, HLA-A2, TNFSF13, VTCN1, CD160, HAVCR2, PVRL12, CCL21, CCL2, CCL11, CXCL12, HLA-DRB1, HLA-DOA, HLA-DPA1, and HLA-DRA. 
These findings suggest that SLC39A1 , SLC39A4 , and SLC39A8 may play a role in modulating the immune environment in specific cancers. Their expression levels are associated with the infiltration of immune cells and correlate with immune-related factors, highlighting their potential as therapeutic targets in cancer treatment.
Discussion and Conclusion This study presents an investigation into the potential role of zinc transporters, specifically the SLC39A (ZIP) and SLC30A (ZNT) families, in cancer progression, along with their potential as prognostic markers and therapeutic targets. Zinc, as an essential trace element, plays a vital role in various biological activities, including structural stability, biocatalysis, and signal regulation 23 , 24 . Maintaining zinc homeostasis within the body is of utmost importance, as both zinc deficiency and excess can have detrimental effects on human health. Among the proteins associated with zinc homeostasis, zinc transporters, including members of the SLC39A and SLC30A families, play a vital role 25 . These transporters facilitate the translocation of zinc ions in different directions, thereby maintaining the delicate balance of intracellular zinc ions. Aberrant expression of zinc transporters has been implicated in cancer progression, highlighting their potential significance in this context. In this study, we conducted a comprehensive analysis of SLC30A and SLC39A family gene expression and mutation patterns across various cancer types. Moreover, the study explored the association between the expression levels of SLC39A1 , SLC39A4 , and SLC39A8 and both prognosis and immune cell infiltration in respective tumors. Notably, SLC39A1 demonstrated potential prognostic value in LIHC, while SLC39A4 exhibited prognostic implications in CESC and PAAD. Clinically, liver transplantation is the only curative method and the 5-year survival rate after surgery is about 60% 26 . It has been reported that the knockdown of SLC39A1 can inhibit the proliferation of hepatocellular carcinoma cells and reduce the expression of cell cycle-related proteins 27 , highly suggesting the vital role of SLC39A1 in the progression of liver cancer. Besides, the previous study has shown that SLC39A4 can be a prognostic marker for CESC 28 , in agreement with our analysis in this study. The expression of SLC39A4 was also linked to chemotherapeutic response in CESC. It was found that the knockdown of SLC39A4 can significantly improve the sensitivity of CESC to cisplatin treatment 27 . Moreover, we found that the genetic alteration rate of SLC39A4 in PAAD is up to 10% and it is highly associated with a reduced OS in PAAD patients. This may partly explain the poor prognosis of PAAD patients. In contrast, SLC39A8 displays the lowest mutation rate in cancers compared to SLC39A1 and SLC39A4 . The mutated SLC39A8 is associated with a poor OS, disease-specific survival, and DFS of KIRP patients, indicating that the mutated SLC39A8 may contribute to the increased mortality of this disease. It was previously reported that in plants, SLC39A1 (ZIP1) acts as an immune signal peptide that activates cysteine proteases (PLCPs) to trigger the plant immune system and enhance plant resistance to pathogens 29 . SLC39A8 was found to be specifically upregulated in CD4 + T cells that infiltrate inflamed joints. The deficiency of SLC39A8 in CD4 + T cells resulted in the abolishment of collagen-induced arthritis 30 , suggesting a critical role for SLC39A8 in the development or progression of this autoimmune disease. This study revealed intricate relationships between the expression of zinc transporters and immune cell infiltration. 
For instance, SLC39A1 and SLC39A8 expression positively correlated with the infiltration of CD4 + T cells, neutrophils, and myeloid dendritic cells, along with the expression or release of immunosuppressants and activators. Conversely, SLC39A4 expression in CESC and PAAD exhibited negative associations with the infiltration of CD4 + /CD8 + T cells, B cells, myeloid dendritic cells, neutrophils, and macrophages. Notably, prior studies have reported that SLC39A4 mutations result in zinc deficiency and immune dysfunction 31 . Therefore, a comprehensive investigation is warranted to comprehend the precise mechanisms by which zinc transporters modulate immune responses and the tumor microenvironment during cancer progression. The GO and KEGG enrichment analyses conducted in our study demonstrated the enrichment of zinc transporter-related genes in various biological processes, including cell metabolism, cell cycle regulation, and other essential processes. These findings are consistent with previous studies that have highlighted the importance of zinc transporters in these cellular processes 27 , 32 , 33 . However, additional pathological conditions afflicting patients may exert a substantial influence on the expression of zinc transporters. The human brain, being the organ with the highest zinc content, is particularly susceptible to perturbations in zinc concentration. An elevation in zinc levels within the brain can lead to neurotoxicity, while zinc deficiency is associated with various pathological manifestations, including malformations within the central nervous system. Investigation into postmortem brain tissues of individuals with Alzheimer's disease (AD) revealed heightened mRNA levels of the Zn 2+ transporter protein ZIP1 34 . Such alterations in expression may not only reflect but also potentially contribute to modifications in cortical Zn 2+ distribution in AD. In addition, studies have demonstrated that mutations in SLC39A4/ZIP4 result in acrodermatitis enteropathica. Certain acrodermatitis enteropathica-associated mutations in mouse ZIP4 have been observed to impede zinc transport at the plasma membrane; in specific mutants, ZIP4 tends to accumulate in the apical membrane, where diminished zinc uptake activity is evident due to a reduction in the Vmax of uptake 35 . The indispensability of ZIP8 for the maintenance of normal liver function is underscored by findings indicating that moderate or acute reductions in ZIP8 activity induce pathological changes in the liver 36 . Notably, the Zn 2+ transporter protein ZIP8 exhibits specific upregulation in chondrocytes associated with osteoarthritis (OA), leading to heightened intracellular Zn 2+ levels. This ZIP8-mediated Zn 2+ influx subsequently triggers an upregulation of chondrocyte matrix-degrading enzyme expression 37 . Therefore, when exploring the relationship between zinc transporter expression and certain cancers, other diseases that may significantly affect zinc transporter expression should also be taken into account. 
While this study provides valuable insights into the roles of zinc transporters in cancer, it is important to acknowledge its limitations, including the need for explicit exploration of the correlation between zinc ion concentration and zinc transporters in specific cancer types, as well as further validation of the observed associations using larger sample sizes and additional experimental and clinical investigations. Nonetheless, the findings underscore the potential of zinc transporters, particularly SLC39A1 , SLC39A4 , and SLC39A8 , as promising prognostic markers and therapeutic targets in the field of oncology.
* These authors contributed equally to this work as first authors. Competing Interests: The authors have declared that no competing interest exists. The disruption of zinc (Zn) homeostasis has been implicated in cancer development and progression through various signaling pathways. Maintaining intracellular zinc balance is crucial in the context of cancer. Human cells rely on two families of transmembrane transporters, SLC30A/ZNT and SLC39A/ZIP, to coordinate zinc homeostasis. While some ZNTs and ZIPs have been linked to cancer progression, limited information is available regarding the expression patterns of zinc homeostasis-related genes and their potential roles in predicting prognosis and developing therapeutic strategies for specific cancers. In this study, a systematic analysis was conducted to examine the expression of all genes from the SLC30A and SLC39A families at both mRNA and protein levels across different cancers. As a result, three SLC39A genes ( SLC39A1 , SLC39A4 , and SLC39A8 ) were found to be significantly dysregulated in specific cancers, including cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC), liver hepatocellular carcinoma (LIHC), pancreatic adenocarcinoma (PAAD), and kidney renal papillary cell carcinoma (KIRP). Moreover, the dysregulation of these genes was tightly associated with the prognosis of patients with those cancers. Furthermore, we found that the gene SLC39A8 exhibited the lowest mutation frequency in KIRP, whereas mutations in SLC39A4 were found to significantly impact overall survival (OS), disease-free (DF), and progress-free survival (PFS) in cancer patients, particularly in those with PAAD. Additionally, immune infiltration analysis revealed that SLC39A1 , SLC39A4 , and SLC39A8 may function as immune regulators in cancers. This provides new insights into understanding the complex relationship between zinc homeostasis and cancer progression.
Supplementary Material
This work was supported by the National Natural Science Foundation of China (82104207), Zhejiang Provincial Natural Science Foundation of China (LQ22H280001), Zhejiang Provincial Medical and Health Science and Technology Program (2023KY1008), and China Postdoctoral Science Foundation Funded Project (2023M733163). Author contributions All authors contributed to this study. Yanfen Liu and Lu Wei: Formal analysis, Investigation, Visualization, Writing-Original draft; Zhiyu Zhu, Shuyi Ren, Haiyang Jiang, Yufei Huang, and Xiaoyu Sun: Resources, Technical support, Writing-Review & Editing; Xinbing Sui, Lijun Jin, and Xueni Sun: Conceptualization, Writing-Review & Editing, Supervision, Funding acquisition. All authors have read and agreed to the published version of the manuscript. Abbreviations SLC30A SLC39A Cervical squamous cell carcinoma and endocervical adenocarcinoma Liver hepatocellular carcinoma Pancreatic adenocarcinoma Kidney renal papillary cell carcinoma Overall Survival Disease-Free Progress-free survival Gene Expression Profiling Interactive Analysis University of California, Santa Cruz The University of Alabama at Birmingham Gene Ontology Kyoto Encyclopedia of Genes and Genomes The Cancer Genome Atlas Program the Database for Annotation, Visualization, and Integrated Discovery Lower-grade glioma Diffuse large B-cell lymphoma Glioblastoma multiforme Testicular germ cell tumors Bladder urothelial carcinoma Breast invasive carcinoma Colon adenocarcinoma Esophageal carcinoma Lung adenocarcinoma Lung squamous cell carcinoma Ovarian serous cystadenocarcinoma Rectum adenocarcinoma Stomach adenocarcinoma Uterine corpus endometrial carcinoma Uterine carcinosarcoma Kidney renal clear cell carcinoma Acute myeloid leukemia Adrenocortical carcinoma
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):939-954
oa_package/59/3a/PMC10788725.tar.gz
PMC10788726
0
Introduction Colorectal cancer (CRC) is a malignant neoplasm of the colon or rectum and the third most common cancer worldwide, with an estimated 1,880,725 new cases and 915,880 related deaths in 2020 1 . The liver is the most common site of distant metastasis of CRC 2 - 4 . Liver metastasis (LM) is difficult to manage and the main cause of death among patients with CRC. The 5-year survival rate of patients with unresectable LM was less than 5% 2 , 5 . About 15%-30% of patients with CRC already have LM at CRC diagnosis (i.e., synchronous), and 20%-50% will develop metachronous LM (MLM) after radical resection of the primary tumor 3 . The early identification of patients with CRC at a high risk of MLM is essential for targeted screening and individualized treatment. Therefore, it is important to identify features associated with a high risk of MLM. Previous studies reported that some clinical and pathological risk factors are independent factors for LM of CRC, including age, preoperative serum carcinoembryonic antigen (CEA) levels, T and N stages, vascular invasion, histological grade, and KRAS mutations 6 - 11 , but there is no consensus. In addition, effective imaging characteristics of the primary CRC for predicting MLM are lacking. Kim et al. 12 analyzed the computed tomography (CT) features of primary CRC, including the morphologic and enhancement characteristics; seven CT features were associated with poorly differentiated (PD) over well- or moderately differentiated (WD or MD) CRC. Eurboonyanun et al. 13 reported that BRAF-mutant CRC has specific imaging characteristics. Hence, the CT features of the primary CRC could provide useful baseline imaging characteristics for prognostication. In addition, the imaging characteristics of the primary CRC are often less affected and more stable 14 . Therefore, it can be hypothesized that some CT features of primary CRC tumors may be helpful in predicting MLM in patients with CRC. The present study aimed to establish a nomogram model based on clinicopathological and radiological features for predicting MLM from CRC.
Methods Study design and patients This retrospective study included patients with CRC treated by surgical resection at Changshu No.1 People's Hospital and the Second Affiliated Hospital of Soochow University between January 2016 and December 2018. The inclusion criteria were 1) underwent CRC resection and 2) histopathologically confirmed colorectal adenocarcinoma. The exclusion criteria were 1) mucinous adenocarcinoma, 2) synchronous liver metastasis, 3) the first metastatic sites did not include the liver, 4) incomplete clinical and radiological data, 5) neoadjuvant therapy or 6) follow-up of less than 3 years. This study was approved by the Ethics Committee of Changshu No.1 People's Hospital (approval #(2020) LUN No. 012). The first author, Zhihua Lu, originally worked in the Changshu No.1 People's Hospital and was transferred to Dushu Lake Hospital Affiliated to Soochow University, after completing the project. The study was conducted in accordance with the Declaration of Helsinki. The requirement for individual informed consent was waived by the board. Data collection and definitions The pathological diagnosis of the included cases in this study was consistent with the AJCC Version 8 diagnostic criteria 15 . Although different definitions of MLM have been suggested 16 - 19 , the present study used diagnosis/surgery as the cut-off point between the 'synchronous' and 'metachronous' groups 19 . The clinical information and pathological data of each patient were collected from the electronic patient charts. The clinical information included the age, sex, preoperative serum CEA (normal CEA: 0-10 ng/ml), preoperative carbohydrate antigen 19-9 (CA19-9) (normal CA19-9: 0-37 U/ml), primary tumor site, and CT scan data of the chest, abdomen, and pelvis. The pathological information included T and N stages, histologic tumor grade, vascular invasion, perineural invasion, and tumor deposits. The following features were dichotomized into two categories: sex (male vs. female), age (median of the study population; ≤ 66 vs. > 66 years), preoperative CEA levels (upper limit of the normal range; ≤ 10 vs. >10 ng/ml), preoperative CA19-9 levels (upper limit of the normal range; ≤ 37 vs. > 37 μ/ml), vascular invasion (no vs. yes), perineural invasion (no vs. yes), tumor deposits (no vs. yes), maximal wall thickness (median of the study population; ≤ 15 vs. > 15 mm), tumor shape (according to the literature 12 , 13 ; thicken vs. polypoid or bulky), and colonic obstruction (no vs. yes). The other features were classified as multiple categories: T stage (T 1-2 , T 3 , and T 4 ), N stage (N 0 , N 1 , and N 2 ), differentiation grade (well, moderately, and poorly), enhancement pattern (homogeneous, heterogeneous ≤50%, and heterogeneous >50%), enhancement degree (higher attenuation than the liver, attenuation between the liver and muscle, and lower attenuation than the muscle), pericolic fat infiltration pattern (normal, hazy, linear, and nodular), maximal size of regional LN (< 5, 5-10, and > 10 mm). Postoperative follow-up The postoperative clinical follow-up was performed according to the Chinese guidelines 20 . The follow-up examinations were performed for 3 years, including physical examination, abdominal ultrasound, serum CEA, and CA19-9. For patients with stage II or III CRC, a contrast-enhanced CT scan of the chest, abdomen, and pelvis was performed once a year in the first 3-5 years and once every 1-2 years in the following years. 
For patients who were highly suspected of liver metastases on CT images but could not be diagnosed definitely, liver magnetic resonance imaging (MRI) was performed. Computed tomography features The analysis of the CT features was based on the methods by Kim et al. 12 and Eurboonyanun et al. 13 . The analysis included 1) maximal wall thickness, 2) shape of the tumor, 3) enhancement pattern of the tumor, 4) enhancement degree of the tumor, 5) colonic obstruction, 6) pericolic fat infiltration pattern, and 7) size of the regional lymph nodes (LNs). The maximal wall thickness was measured on images perpendicular to the long axis of the tumor. The tumor's shape was classified as intraluminal polypoid mass or bulky and wall thickening ( Figure S1 ). The enhancement pattern of the tumor was evaluated in the portal venous (PV) phase and classified as homogeneous vs. heterogeneous. The heterogeneous pattern was demonstrated as lower attenuation to the tumor due to cystic change, necrosis, and mucinous components in the tumor. According to the ratio of low attenuation area to the tumor, it was further divided into heterogeneous ≤ 50% vs. heterogeneous > 50% ( Figure S2 ). The enhancement degree of the tumor was evaluated in the PV phase and classified as higher attenuation than the liver, attenuation between the liver and muscle, and lower attenuation than the muscle; the region of interest (ROI) was drawn by selecting the largest layer of the solid part of the mass and sketching as much solid part as possible. The average CT value of the tumor was measured on the largest tumor image and compared with the hepatic parenchyma and muscle; the ROI was drawn by selecting the largest dimension of the solid portion of the mass and sketching the maximum solid portion measurement with obvious enhancement. Colonic obstruction was classified as yes or no according to the CT features. Pericolic fat infiltration pattern was classified as normal, hazy, linear, and nodular ( Figure S3 ). If the outer contour of the tumor-bearing colorectal segment was smooth and the mesentery adjacent to the tumor showed the same appearance as the adjacent intra-abdominal fat, then it was considered normal. If the outer contour of the tumor-bearing colorectal segment was smooth and the mesentery adjacent to the tumor showed ill-defined, slightly increased density, then it was considered hazy. If the outer layer of the tumor-bearing colorectal segment was coarse and the mesentery adjacent to the tumor showed a well-defined, linear configuration, then it was considered linear. If the outer contour of the tumor-bearing colorectal segment showed a well-defined nodular configuration and invaded into peritumoral mesentery, then it was considered nodular. The size of the regional LNs was classified as no visible LNs, < 5 mm, 5-10 mm, and > 10 mm according to the short-axis diameter. Regional LNs were defined as LNs located along the course of the major vessels supplying the tumor-bearing colorectum, along the vascular arcades of the marginal artery, and the mesocolic border of the colon 21 . Statistical analysis Statistical analysis was performed using SPSS 22.0 (IBM Corp., Armonk, NY, USA). The nomogram was plotted using the “rms” package in R version 3.4.1, and all ROC curves were drawn using MedCalc 18.0. All data were expressed as n (%). The categorical variables were analyzed using the chi-square test. 
The patient characteristic variables with statistical significance (P < 0.05) between the two groups were included in the multivariable logistic regression analysis to identify the independent risk factors for MLM. A predictive nomogram for MLM development was constructed based on the independent risk factors screened by the multivariable logistic analysis. The predictive efficiency of the nomogram model was evaluated using receiver operating characteristic (ROC) curve analysis, and the area under the curve (AUC) was calculated. Finally, a calibration curve for the nomogram was plotted. To evaluate the clinical application value of the nomogram, decision curve analysis (DCA) was used to calculate the net benefit across a range of threshold probabilities.
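As an illustration of how the modelling steps above fit together, the following R sketch fits a multivariable logistic model and draws a nomogram with the rms package named in the text. The data frame `df` and its variable names are hypothetical placeholders rather than the study's actual variable coding, and pROC is shown only as an R stand-in for the ROC analysis that was actually performed in MedCalc.

library(rms)
library(pROC)

# rms requires a datadist object describing the predictors before a nomogram can be plotted
dd <- datadist(df)
options(datadist = "dd")

# Multivariable logistic regression on the candidate risk factors (hypothetical column names)
fit <- lrm(MLM ~ age_group + N_stage + vascular_invasion + tumor_deposit + fat_infiltration,
           data = df, x = TRUE, y = TRUE)

# Nomogram mapping each predictor to points and total points to the predicted MLM probability
nom <- nomogram(fit, fun = plogis, funlabel = "Risk of MLM")
plot(nom)

# Discrimination: ROC curve and AUC of the fitted probabilities
roc_obj <- roc(df$MLM, predict(fit, type = "fitted"))
auc(roc_obj)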
Results

This study included 161 patients with CRC [median age: 66 (range, 33-87) years] ( Figure 1 ); 59 developed MLM at a median of 12 (range, 2-52) months after surgery. Among the 17 characteristics examined, 10 differed significantly between the MLM and non-MLM groups: age ( P = 0.036), T stage ( P = 0.037), N stage ( P < 0.001), vascular invasion ( P < 0.001), maximal wall thickness ( P = 0.015), enhancement pattern ( P = 0.005), tumor deposit ( P < 0.001), colonic obstruction ( P = 0.023), pericolic fat infiltration pattern ( P < 0.001), and maximal size of regional LNs ( P = 0.014) ( Table 1 ). Sex ( P = 0.714), preoperative CEA level ( P = 0.104), preoperative CA199 level ( P = 0.563), perineural invasion ( P = 0.179), differentiation grade ( P = 0.070), tumor shape ( P = 0.735), and enhancement degree ( P = 0.526) were not associated with MLM.

The multivariable logistic regression analysis showed that age >66 years (OR = 3.471, 95% CI: 1.272-9.473, P = 0.015), N2 stage (OR = 6.534, 95% CI: 1.456-29.317, P = 0.014), positive vascular invasion (OR = 2.995, 95% CI: 1.132-7.926, P = 0.027), positive tumor deposit (OR = 4.451, 95% CI: 1.153-17.179, P = 0.030), and linear (OR = 6.774, 95% CI: 1.306-35.135, P = 0.023) and nodular (OR = 8.762, 95% CI: 1.521-50.457, P = 0.015) pericolic fat infiltration patterns were independently associated with MLM ( Table 2 ). There was no collinearity among the five variables ( Figure S4 ).

Based on these five independently associated factors, a nomogram was developed for predicting MLM of CRC ( Figure 2 ). The AUC of the ROC curve was 0.866 (95% CI: 0.803-0.914, P < 0.001) ( Figure 3 ), indicating good predictive efficiency. The nomogram had 88.1% sensitivity, 74.5% specificity, 79.5% accuracy, a 73.2% positive predictive value, and an 82.9% negative predictive value. In addition, the calibration curve of the nomogram showed satisfactory agreement between predicted and actual probabilities ( Figure 4 ). Figure 5 shows the decision curve analysis.

Of the 161 patients in this study, 118 had postoperative treatment records available at our hospital (43 did not receive adjuvant therapy or received it at other hospitals). Among these 118 patients, the final TNM stage was II in 48 patients, III in 66, and IV in four. Regarding adjuvant therapy, 83 patients received FOLFOX and 35 received CapeOx for 3-6 months. Five patients with rectal cancer received postoperative adjuvant chemotherapy and pelvic radiation therapy.
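The diagnostic metrics quoted above (sensitivity, specificity, accuracy, and predictive values) all follow from a single 2 × 2 table of predicted versus observed MLM status at a chosen probability cut-off. A minimal R helper illustrating the calculation is given below; it assumes the predicted probabilities and outcomes from the sketch in the Methods, and the cut-off shown is purely illustrative, since the threshold underlying the reported values is not stated.

```r
## Illustrative helper: `prob` and `crc$mlm` are the predicted probabilities and observed
## 0/1 outcomes from the sketch above; the cut-off shown is hypothetical, since the paper
## does not report the probability threshold behind the quoted sensitivity/specificity.
classify_metrics <- function(prob, outcome, cutoff) {
  pred <- as.integer(prob >= cutoff)
  tp <- sum(pred == 1 & outcome == 1); fp <- sum(pred == 1 & outcome == 0)
  fn <- sum(pred == 0 & outcome == 1); tn <- sum(pred == 0 & outcome == 0)
  c(sensitivity = tp / (tp + fn),
    specificity = tn / (tn + fp),
    accuracy    = (tp + tn) / length(outcome),
    ppv         = tp / (tp + fp),
    npv         = tn / (tn + fn))
}
classify_metrics(prob, crc$mlm, cutoff = 0.35)   # cut-off chosen purely for illustration
```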
Discussion The results showed that a nomogram prediction model based on age >66 years, N2 stage, positive vascular invasion, positive tumor deposit, and linear and nodal pericolic fat infiltration patterns might have favorable prediction performance for the progression of CRC to MLM. These findings may help clinicians identify patients with a high risk of MLM development after CRC resection and adjust the follow-up accordingly. The present study proposes a nomogram that can easily be used to predict MLM after CRC surgery based on readily available features. Previous studies focused on the clinical and pathological factors that could predict MLM in CRC patients 7 , 8 , 11 , but few studies included the CT imaging characteristics of the primary CRC to predict the development of MLM. Xiao et al. 22 constructed a model based on deep learning analysis of pathological images. None of these models included imaging features, while the present study did. Recent studies used radiomics to construct a model predicting MLM after CRC resection 23 - 26 , but radiomics relies heavily on the software and local imaging parameters used for scanning. The naked eye cannot observe most radiomics features, and the external validity is generally limited 27 . It is why it was decided to examine hard CT features in the present study. Age and CEA have been confirmed to be important clinical risk factors for developing MLM of CRC 7 , 8 , 11 . The present study demonstrated that age was independently associated with the development of MLM. Previous studies reported the association between preoperative serum CEA levels and LM, prognosis, and recurrence of patients with CRC 7 , 8 , 11 . Generally, increased serum CEA levels before surgery have been related to MLM of CRC 28 , 29 . Nevertheless, preoperative serum CEA was not associated with MLM in the univariable logistic regression analysis, which might be because serum CEA levels are influenced by several factors 28 , 29 , including tumor size, tumor CEA contents, CEA production rates, tumor location, and the rate of CEA elimination 29 - 31 , and their results contradict the suggestion that CEA levels increase with more advancing stages of CRC. Previous studies reported that pathological factors such as higher T stage, positive N stage, and positive vascular invasion were associated with MLM of CRC 7 - 11 . A higher T stage means that the tumor cells infiltrate the intestinal wall deeper, resulting in a higher probability of infiltration of blood vessels and lymphatic vessels and a higher risk of distant metastasis. In the multivariable logistic regression analysis, the N2 stage and positive vascular invasion were independently associated with MLM. In contrast, the T stage was associated with the univariable analysis but not the multivariable one. The limited sample size could explain these results. In this study, 23% of the patients with the T1-2 stage (n = 7), 47% of the patients with the T3 stage (n = 33), and 32% of the patients with the T4 stage (n = 19) developed MLM. Tumor deposits are defined as discrete tumor nodules without histologic evidence of a residual LN identified in the pericolic or perirectal tissue away from the leading edge of the tumor 32 . In the latest 7 th and 8 th editions of the American Joint Committee on Cancer (AJCC) TNM staging system, if there is a positive tumor deposit but no concurrent LN metastasis, the N stage should be categorized as the N1c stage 32 . 
Positive tumor deposits are associated with recurrence, metastasis, and poorer survival outcomes in CRC patients 33 - 35 . In the present study, positive tumor deposit was independently associated with the development of MLM by multivariable logistic regression analysis. In line with this result, Wu et al. 33 retrospectively analyzed two large independent cohorts of patients with CRC and showed that tumor deposits were an independent predictor of liver metastasis (OR: 4.662, CI:2.743-7.923). For CRC patients whose postoperative pathological results suggest the presence of positive tumor deposits, more rigorous follow-up might be needed. Current studies on predicting MLM of CRC based on imaging of liver parenchyma or primary tumor are rare. Beckers et al. 24 , 25 analyzed the CT images of the liver parenchyma of patients with CRC based on texture analysis and got limited results. Indeed, uniformity had the potential to predict LM during the first postoperative 6 months but not beyond 6 months 25 . In addition, there were no additional effects found for texture assessment on a segmental level 24 . Still, several factors may influence imaging measurement of the liver parenchyma 25 . On the other hand, the imaging characteristics of the primary CRC tumor are often less affected and more stable than liver parenchyma 14 . After multivariable logistic regression analysis, the pericolic fat infiltration pattern was considered the most important predictor in the nomogram model. The linear and nodular patterns also indicated a higher risk of MLM. The explanation might be that the linear and nodular patterns are associated with deeper infiltration, resulting in more vascular invasion and LN metastasis. Previous studies reported that pericolic fat infiltration was correlated with the pathological variables of CRC 12 , 36 , 37 . A study by Kim et al. 12 reported that PD colorectal adenocarcinoma demonstrated significantly more nodular pericolic fat infiltration than WD or MD. Sa et al. 36 analyzed the correlation between the T stage of CRC and nine CT imaging characteristics. Six CT variables, including pericolic fat infiltration, were positively correlated with the T stage. Zeina et al. 37 quantitatively analyzed the pericolic fat, including the maximal distance between tumor margins and normally appearing mesenteric fat and mean CT values of pericolic fat adjacent to the tumor. They found that the overall sensitivity, specificity, and accuracy of pericolic fat infiltration in detecting patients with ≥T3 stage were 95%, 20%, and 81.9%. Still, Ng et al. 38 reported that abnormal pericolic fat features were not a precise indicator of the extramuscular extension of the tumor. Therefore, the pericolic fat infiltration pattern of the primary CRC tumor, especially linear and nodular patterns, might help predict the development of MLM. There were some limitations in this study. Firstly, the sample size was limited. Secondly, all patients were followed up for at least 3 years, but the follow-up was relatively short (<5 years). Therefore, it is possible that a longer follow-up would yield more cases of MLM. Furthermore, patients with advanced CRC generally receive chemotherapy after surgery, and the development of MLM might be affected by chemotherapy. It will have to be considered in future larger studies. Thirdly, although the nomogram showed good predictive efficiency, this study only used internal data. External validity remains to be evaluated. 
Fourthly, some molecular tumor markers, such as KRAS, NRAS, and BRAF, might show a promising association between their positivity and LM 39 , 40 , but these data could not be collected in the present study because the corresponding tests were not performed in all cases during the study period. Fifthly, the CT features analyzed in this study were based on those described by Kim et al. 12 and Eurboonyanun et al. 13 , covering basic morphological and enhancement characteristics, and might therefore not be comprehensive enough. Other detailed CT features of the primary CRC should be examined in the future.
Conclusion

This study established a nomogram model for predicting the risk of MLM development in patients with CRC based on clinical and pathological features (age, N stage, vascular invasion, and tumor deposits) and a radiological feature (pericolic fat infiltration pattern), with good calibration and predictive efficiency. This model might help clinicians identify patients at high risk of MLM development after CRC resection and adjust their follow-up accordingly.
† These authors contributed equally to this work. Competing Interests: The authors have declared that no competing interest exists. Objective: To establish a nomogram prediction model (based on clinicopathological and radiological features) for the development of metachronous liver metastasis (MLM) in patients with colorectal cancer (CRC). Methods: This retrospective study included patients with CRC who underwent surgery at Changshu No.1 People's Hospital and the Second Affiliated Hospital of Soochow University between January 2016 and December 2018. The clinical, pathological, and radiological features of each patient were investigated. Risk factors for MLM were identified by univariable and multivariable analyses. The predictive nomogram for MLM development was constructed. The predictive performance of the nomogram was estimated by the receiver operating characteristics curve, calibration curve, and decision curve analysis. Results: This study included 161 patients with CRC [median age: 66 (range, 33-87) years]. Fifty-nine developed MLM after a median of 12 (range, 2-52) months after surgery. The multivariable logistic regression analysis showed that age >66 years (OR=3.471, 95% CI: 1.272-9.473, P =0.015), N2 stage (OR=6.534, 95% CI: 1.456-29.317, P =0.014), positive vascular invasion (OR=2.995, 95% CI: 1.132-7.926, P =0.027), positive tumor deposit (OR=4.451, 95% CI: 1.153-17.179, P =0.030), and linear (OR=6.774, 95% CI: 1.306-35.135, P =0.023) and nodal pericolic fat infiltration patterns (OR=8.762, 95% CI: 1.521-50.457, P =0.015) were independently associated with MLM. These five factors were used to create a nomogram. The area under the receiver operating characteristics curve of the nomogram was 0.866 (95% CI: 0.803-0.914), indicating favorable prediction performance. The calibration curve of the nomogram showed a satisfactory agreement between the predicted and actual probabilities. Conclusions: A nomogram prediction model based on five clinicopathological and radiological features might have favorable prediction performance for MLM in patients who underwent surgery for CRC. Hence, the present study proposes a nomogram that can easily be used to predict MLM after CRC surgery based on readily available features.
Supplementary Material
Funding This work was supported by the Suzhou Clinical Special Disease Diagnosis and Treatment Program (LCZX202346), Suzhou GuSu Medical Talent Project (GSWS2019077, GSWS2020108 and GSWS2022100), Suzhou Science and Technology Development Program (SYS2020058), and Changshu Science and Technology Development Program (CS202029). Ethics approval and consent to participate This study was approved by the Ethics Committee of Changshu No.1 People's Hospital (approval #(2020) LUN No. 012). The first author, Zhihua Lu, originally worked in the Changshu No.1 People's Hospital and was transferred to Dushu Lake Hospital Affiliated to Soochow University, after completing the project. The study was conducted in accordance with the Declaration of Helsinki. The requirement for individual informed consent was waived by the board. Author contributions Zhihua Lu and Jinbing Sun carried out the studies, participated in collecting data, performed the statistical analysis, and drafted the manuscript. Mi Wang and Heng Jiang participated in collecting data. Guangqiang Chen and Weiguo Zhang participated in the acquisition, analysis, or interpretation of data. All authors read and approved the final manuscript.
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):916-925
oa_package/71/de/PMC10788726.tar.gz
PMC10788727
0
Introduction

Prostate cancer (PCa) is the second most common malignancy in men worldwide 1 , with increasing mortality and morbidity over the past 20 years 2 . PCa remains a major medical challenge given that tumor resistance to radiotherapy, chemotherapy, or androgen deprivation therapy promotes tumor metastasis and recurrence 3 . PCa is a complex disease involving deregulated oncogenes and tumor suppressor genes 1 . Therefore, elucidating the genetic mechanisms of PCa is crucial to improving the efficiency of clinical treatment.

N(6)-methyladenosine (m6A) is the most abundant reversible methylation modification in eukaryotic RNA 4 , 5 . m6A modification is thought to control the production and functions of noncoding RNAs, including circular RNAs (circRNAs) and microRNAs (miRNAs) 6 , 7 . CircRNAs are eukaryotic, covalently closed endogenous biomolecules with tissue-specific and cell-specific expression patterns 8 , 9 . Studies have shown that circRNAs can promote or inhibit tumorigenesis in various cancers, including hepatocellular carcinoma 10 , breast cancer 11 , glioblastoma 12 , and bladder cancer 13 . So far, some circRNAs have been reported to be associated with PCa progression, such as circ_0086722 14 and circRNA SR-related and CTD-associated factor 8 15 . However, the function and underlying mechanism of circRNA family with sequence similarity 126, member A (circFAM126A) in PCa have not been thoroughly investigated. By interacting with DNA, miRNA, lncRNA, or protein, circRNAs act as regulators of gene expression at different levels and play regulatory roles in various physiological and pathological cellular processes 16 . miRNAs have been widely studied as key regulators in PCa 17 - 19 , and multiple miRNAs have been identified as biomarkers for cancer diagnosis, treatment, and prognosis 20 , 21 . However, the mechanism of miR-505-3p in PCa remains unclear.

This study identified a novel circRNA, circFAM126A, in the circMine dataset and explored its effect on the biological characteristics of PCa cells. The following hypothesis was put forward: m6A-modified circFAM126A regulates miR-505-3p-targeted calnexin (CANX) to affect cholesterol synthesis and malignant progression in PCa.
Materials and Methods Bioinformatics analysis The GEO dataset (GSE113153) was analyzed via the Bioinformatics website circmine ( http://www.biomedical-web.com/circmine/home ) to assess differential circRNA in PCa. The gene screening criteria were Log2FC > 1, and the adjusted P value was < 0.05. Starbase website ( https://rnasysu.com/encori/ ) was used to predict targeted binding sites between circFAM126A and CANX with miR-505-3p. Clinical samples Clinical PCa tissue and normal tissue specimens were obtained from 46 newly diagnosed PCa patients who underwent prostatectomy in the First Affiliated Hospital of Shaoyang University from 2013 to 2018. These tumor specimens were microscopically evaluated by two experienced pathologists and classified into I + II stage and III + IV stage according to tumor node metastasis (TNM) stages (URL: www.cancerstaging.org/). No patient received preoperative anticancer therapy or chemoradiotherapy. The specimens were stored in liquid nitrogen. Survival data were collected for 5 consecutive years. The inclusion and exclusion criteria for patients were as follows. Inclusion criteria: (1) Male patients aged between 50 and 80 years; (2) Histopathologically confirmed PCa patients; (3) Well organ function, such as kidney function and liver function. Exclusion criteria: (1) other major medical conditions such as serious heart disease, uncontrolled high blood pressure, and diabetes complications; (2) History of other malignancies within the last five years; (3) Severe mental illness or cognitive impairment, inability to understand study procedures or follow study guidelines; (3) Drug administration that may interfere with the results of the study, or allergy to drugs used in the study. Cell lines and cell culture Human normal embryonic kidney cell line HEK293T, human normal prostate epithelial cell line RWPE1, and PCa cell lines (PC-3, DU145, LNCAP, VCAP, and 22RV1) were purchased from the National Collection of Authenticated Cell Cultures (Shanghai, China). HEK293T and VCAP cells were cultured in Dulbecco's modified Eagle medium (Gibco), RWPE-1 cells in keratinocyte serum-free medium (Gibco), and PC-3, DU145, LNCAP, and 22RV1 cells in Roswell Park Memorial Institute (RPMI)-1640 medium (Gibco). Fetal bovine serum (10%, Gibco), 100 U/ml penicillin, and 100 μg/ml streptomycin (Invitrogen) were supplementary to the above medium. The cells were cultured at 37°C in a humidified incubator with 95% air and 5% CO 2 22 . Transfection miR-505-3p mimic/inhibitor and its blank control (miR-NC), short hairpin RNAs targeting circFAM126A (sh-CircFAM126A#1 and sh-CircFAM126A#2) and its control (sh-NC) were purchased from GenePharma. CANX sequences were PCR-amplified and subcloned into pcDNA3.1 (Thermo Fisher Scientific) to obtain a CANX overexpression plasmid, named pcDNA-CANX. PC-3 and DU145 cells were transfected using Lipofectamine 2000 (Thermo Fisher Scientific) 23 . Cell counting kit (CCK)-8 assay Cell proliferation was assessed with CCK-8 (Dojindo, Kumamoto, Japan) Cells were cultured for 24, 48, and 72 h in 96-well plates, and the medium was replaced with 10 μl CCK-8 solution in each well. After 2 h, optical density values were read at 450 nm on a VarioskanTM LUX microplate reader (Thermo Fisher Scientific). Colony formation test Cells were seeded into 6-well plates (700 cells/well) and cultured in an incubator containing 5% CO 2 at 37°C for 14 d. 
The colonies were fixed with 4% paraformaldehyde (Beyotime) for 10 min and stained with 0.1% crystal violet solution (Beyotime) for 5 min. Colonies containing at least 50 cells were counted 24 . 5-ethynyl-2'-deoxyuridine (EdU) assay The proliferation of PCa cells was detected by EdU staining proliferation kit (Abcam, USA). Cells were incubated with EdU solution for 24 h and added with the fixative for 15 min. Then, cells were added with a permeability buffer for 15 min, marked with fluorescently-labeled EdU solution for 30 min, and observed under a fluorescence microscope 25 . EdU-positive cells were counted using the ImageJ software (National Institutes of Health). Cell migration and invasion assays Cells were suspended in a serum-free RPMI-1640 medium. The suspended cells were seeded at a density of 10 5 cells/well to the filter (with or without Matrigel) in the upper chamber of 8.0-μm transwell plate (Corning). After 72-96 h, migrating and invading cells on the lower surface of the filter were fixed in methanol and stained with 0.05% crystal violet solution. The stained cells were counted in four randomly selected fields of view using Olympus CKX52 microscope (magnification, × 200). Flow cytometry Apoptosis was detected by Annexin apoptosis assay kit (Sigma) according to the prescribed procedure. At 48-h post-transfection, cells were resuspended in a binding buffer, added with 5 μL Annexin V-fluorescein isothiocyanate and 5 μL propidium iodide in the dark for 5 min, and analyzed by tune NxT (Thermo Fisher Scientific) 26 . Measurement of triglyceride (TG) and cholesterol levels Cells were lysed using the kit (Nanjing Jiancheng), in which TG and cholesterol levels were quantified with quantification kits, respectively 27 . RNA extraction, RNase R treatment, polymerase chain reaction (PCR) detection Total RNA extracts were collected from cells and tissues using RNeasy Mini Kit (QIAGEN, Germany). complementary DNA (cDNA) was synthesized using random primer and PrimeScript RT Master Mix reverse transcription kits (Takara, Dalian, China) or miRNA reverse transcription PCR kits (Ribo-Bio). To isolate genomic DNA, QIAamp DNA Mini kits (QIAGEN) was used. PCR analysis was done with SYBR Premix Ex Taq TM kits (Takara). circRNA and miRNA were normalized to glyceraldehyde-3-phosphate dehydrogenase (GAPDH) or U6 levels, respectively. Data were analyzed in the StepOnePlus real-time PCR system (Applied Biosystems). Bulge-loop miRNA qPCR primers (RiboBio) were seen in Table 1 28 . Western blot Proteins from cells and tissues were collected using radioimmunoprecipitation assay lysis buffer (Thermo Fisher Scientific) and analyzed for protein concentration using the bicinchoninic acid protein assay kit (Pierce, USA). Then, proteins (30 μg) were electrophoresed on sodium dodecyl sulfate polyacrylamide gel, electroblotted onto polyvinylidene fluoride membranes, and blocked with 5% nonfat milk for 2 h in tris-buffered saline Tween-20. Primary antibodies against CANX (1:1000, ab22595, Abcam), insulin-like growth factor 2 mRNA-binding protein 1 (IGF2BP1, 1:1000, ab82968, Abcam), Vascular endothelial growth factor (VEGF, 1:1000, sc-7269, Santa Cruz Biotechnology), programmed death-ligand 1 (PD-L1, 1:1000, 13684, Cell Signaling Technology), and GAPDH (1:1000; ab37168, Abcam) were incubated with the membrane at 4°C overnight, and horseradish peroxidase-linked secondary antibody (1:500; SC-2054; Santa Cruz Biotechnology) was added for 2 h at room temperature. 
Protein bands were visualized using an enhanced chemiluminescence detection kit (Millipore) 29 . RNase R test Total RNA from PC-3 and DU145 cells was treated with RNase R (Sigma) for 15 min at 37°C and then purified with phenol-chloroform (Sigma) to measure circular or linear FAM126A by reverse transcription quantitative PCR (RT-qPCR). Actinomycin D test Actinomycin D (5 μg/mL) was added to the culture medium of cells and incubated for 0, 4, 8, 16, and 24 h. The stability of mRNA was analyzed by PCR 30 . Methylated RNA immunoprecipitation (MeRIP)-qPCR m6A modification was measured by Magna MeRIPTM m6A Kit (Millipore). Briefly, 150 μg RNA extracted from pretreated cells were prepared to fragments (≤ 100 nt) and immunoprecipitated with magnetic beads coated with 10 μg anti-m6A (Millipore) or anti-mouse immunoglobulin G (Millipore). m6A enrichment was normalized 31 . Fluorescence in situ hybridization (FISH) The subcellular location ofcircFAM126A1 was determined by FISH. Hybridization was done using RNA FISH kit and three 5'-cy3-labeled probes targeting the splicing site of circFAM126A1 (GenePharma), during which the probe mixture was concentrated at 8 μmol/L. Photography was done under a fluorescence inverted microscope (Olympus) 32 . RNA-binding protein immunoprecipitation (RIP) Cells after 48-h transfection were lysed in the RIP lysis buffer on ice for 30 min and centrifuged. The supernatant was incubated with antibodies and 30 μl Protein-A/G agarose beads (Roche) overnight. The immune complexes were centrifuged, washed with cleaning buffer 6 times, and analyzed by RT-qPCR. RNA pull-down assay Cell lysates were incubated with streptavidin (Invitrogen)-coated magnetic beads to pull down biotin-conjugated RNA complexes. The enrichment of circFAM126A1 was assessed by RT-qPCR analysis. Bound proteins were eluted from the packaged beads and analyzed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis. Xenograft model PC-3 cells after digestion and centrifugation were resuspended in phosphate buffered saline (PBS) to 1 × 10 6 /mL and subcutaneously injected into BALB/c nude mice (n = 6) at a dose of 0.1 mL. The same dose of PBS served as a blank control. Tumor nodules that appeared at the injection site were measured with a vernier caliper every week. The longest diameter (a) and longitudinal diameter (b) were recorded to calculate tumor volume (a × b 2 /2). After 4 weeks, tumors were excised from euthanized mice (3% pentobarbital sodium, intraperitoneal injection, 160 mg/kg) and weighed 33 . Immunohistochemistry Xenografts were cryopreserved and sectioned to 5 μm by Tek O.C.T (Fisher Scientific). Cleaved caspase-3 (9661, Cell Signaling Technology) and Ki-67 (ab15580, Abcam) were added to the sections overnight at 4°C. Subsequently, the sections were added with horseradish peroxidase-labeled secondary antibody and incubated at 37°C for 30 min. Streptavidin-biotin complex was added at 37°C for 20 min, and the sections were developed with diaminobenzidine. Sections were stained with hematoxylin, dehydrated and permeated with xylene, and sealed with neutral balsam. Staining intensity and staining positive cells were analyzed using the ImageJ software (National Institutes of Health) 34 . Oil red O staining Oil Red O solution (6 mL, Sigma-Aldrich) and distilled water (4 mL) were mixed. After 10 min, the mixture was dropped on the xenografts for 5-10 min, and excessive staining buffer was removed with 60% Oil Red O and isopropanol solution. 
The tumors were rinsed with distilled water and counterstained with hematoxylin (Servicebio, Wuhan, China) 35 . Lung metastases PC-3 cells (2 × 10 6 ) that were transfected with LV-sh-circFAM126A or LV-sh-NC for 48 h were suspended in 0.1 ml PBS and injected into the caudal vein of mice. Then, mice were isoflurane-anesthetized and given an intraperitoneal injection with D-luciferin (15 mg/ml PBS) at 150 mg/kg. After 7 weeks, D-luciferin injection was repeated once again. After 10 min, bioluminescence signals in mice were captured with a high-sensitivity camera in a IVIS200 chamber (Xenogen) and quantified by live Image software (Xenogen) 36 . Statistical analysis SPSS 21.0 and Prism 6.0 were needed to analyze data. Measurement data (mean ± standard deviation) were compared by t test and one-way ANOVA. For correlation analysis, Pearson method was utilized. All tests were two-sided and p < 0.05 was considered statistically significant.
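Two simple calculations recur in these Methods: relative RNA expression (circRNA and mRNA normalized to GAPDH, miRNA to U6) and xenograft tumor volume (a × b²/2). The short R sketch below illustrates both. It assumes the standard 2^-ΔΔCt approach for relative expression, which the normalization described implies but does not spell out, and all Ct values and diameters are invented for illustration.

```r
## Both calculations are sketches: the 2^-ΔΔCt method is a standard approach consistent
## with the normalization described (GAPDH for circRNA/mRNA, U6 for miRNA) but is an
## assumption here, and every Ct value and diameter below is invented for illustration.

# Relative expression by the 2^-ΔΔCt method
relative_expression <- function(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl) {
  d_ct      <- ct_target - ct_reference             # ΔCt in the sample of interest
  d_ct_ctrl <- ct_target_ctrl - ct_reference_ctrl   # ΔCt in the control/reference sample
  2 ^ (-(d_ct - d_ct_ctrl))                         # fold change versus the control
}
relative_expression(ct_target = 24.1, ct_reference = 17.8,            # e.g. circFAM126A vs GAPDH, PCa cell
                    ct_target_ctrl = 26.9, ct_reference_ctrl = 17.6)  # same pair in a control cell line

# Xenograft tumor volume from the longest diameter a and the longitudinal diameter b
tumor_volume <- function(a, b) a * b^2 / 2
tumor_volume(a = 9.2, b = 6.5)   # illustrative measurements, e.g. in mm
```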
Results circFAM126A expression pattern in PCa The circRNA transcripts of 3 PCa tissue samples with Gleason score < 6 and 3 PCa tissue samples with Gleason score > 8 (circmine ID: HSACM000016) were obtained from circMine ( http://www.biomedical-web.com ) (Fig. 1 A). In total, 862 circRNAs were upregulated and 497 circRNAs were downregulated (Fig. 1 B). Since hsa_circ_0001971 expression exhibited 3.7754 log2 fold change, it was selected as the circRNA of interest. To further validate the RNA-seq data, hsa_circ_0001971 in 46 paired PCa tissues and normal tissues was measured, and it was found that hsa_circ_0001971 expression was elevated in PCa tissues (Fig. 1 C). According to the circBase information database ( http://www.circbase.org/ ), the mature sequence of hsa_circ_0001971 after splicing is located at chr12:70193988-70195501 with a length of 1513 bp, which contains exons 3-7 from FAM126A mRNA (Fig. 1 D). The ring structure and subcellular localization of circFAM126A were then evaluated by cell experiments. PCR analysis validated that circFAM126A could be amplified by different primers amplified from random hexamer reverse-transcribed cDNA, whereas not by gDNA primers (Fig. 1 E). Resistance of circFAM126A to RNase R digestion confirmed that circFAM126A has a closed-loop structure (Fig. 1 F). Actinomycin D treatment showed a more stable half-life of circFAM126A transcripts over 24 h compared to FAM126A (Fig. 1 G). According to nuclear and cytoplasmic separation experiment results, circFAM126A was distributed in both the nucleus and cytoplasm, and the distribution in the cytoplasm was higher than that in the nucleus (Fig. 1 H). This finding was further verified by FISH (Fig. 1 I). These data confirm that circFAM126A is abnormally high expressed in PCa as a circRNA, which may be involved in regulating PCa occurrence and development. circFAM126A is associated with poor prognosis in PCa Taking circFAM126A medium expression as a cutoff value, PCa patients were allocated to the circFAM126A high-expression group and circFAM126A low-expression group. Analysis of clinicopathological features found that circFAM126A high expression was positively correlated with tumor size, TNM stage, and microvascular invasion (Table 2 ). Receiver operating characteristic curve analysis calculated that the area under the curve of circFAM126A to differentiate PCa tissue from normal tissue was 0.8625 (Fig. 2 A). Kaplan Meier survival curves showed that patients with higher circFAM126A expression had shorter overall survival (Fig. 2 B). RT-qPCR measured that circFAM126A was significantly upregulated in PCa cells compared to the normal prostate epithelial cell line RWPE1 (Fig. 2 C). PC-3 and DU145 cells with the most significant expression were selected for follow-up experiments. circFAM126A promotes PCa in vitro shRNAs were produced to silence circFAM126A without affecting FAM126A mRNA levels in PC-3 and DU145 cells (Fig. 3 A). Data collected from CCK-8, colony formation assay, and EdU assay indicated that sh-circFAM126A suppressed PC-3 and DU145 cell proliferation (Fig. 3 B-D). Then, flow cytometry detected that PC-3 and DU145 cell apoptosis was enhanced after knocking down circFAM126A (Fig. 3 E). Next, Transwell assays found that silencing circFAM126A reduced the invasion and migration ability of PC-3 and DU145 cells (Fig. 3 F, G). The importance of abnormal cholesterol metabolism in tumor cell physiology has been emphasized. 
TG and cholesterol levels were reduced in PC-3 and DU145 cells after inhibiting circFAM126A (Fig. 3 H, I). LUR1 (also called C12orf49) is a novel regulator of lipid production, which regulates the mRNA processing of sterol regulatory element-binding proteins (SREBPs) to up-regulate SREBPs and accelerate cholesterol synthesis. LUR1 and SREBPs mRNA levels were decreased after inhibition of circFAM126A (Fig. 3 J, K). These results suggest that inhibition of circFAM126A inhibits cholesterol synthesis and malignant progression of PCa cells. m6A modification of circFAM126A improves transcriptome stability In order to explore m6A modification of circFAM126A, m6A site in circFAM126A was predicted using the online bioinformatics tool m6A Avar ( http://m6avar.renlab.org/ ) (Fig. 4 A). MazF was used to provide nucleotide resolution quantification of m6A methylation sites. MazF toxin is an aca sequence-specific endorbonuclease that is sensitive to the m6A site and represents the first m6A sensitive RNA lyase. m6A levels were higher in PC-3 and DU145 cells than in RWPE1 cells (Fig. 4 B). Then, Mett3, Mett14, Fat mass and obesity associated (Fto) gene, IGF2BP1, nsulin-like growth factor 2 mRNA-binding protein 2 (IGF2BP2), and nsulin-like growth factor 2 mRNA-binding protein 3 (IGF2BP3) in PC-3 and DU145 cells were examined, and it was found that IGF2BP1 was mostly expressed, suggesting that IGF2BP1 protein is the m6a-binding protein of circFAM126A (Fig. 4 C). In clinical samples, Western blot and RT-qPCR measured that IGF2BP1 was highly expressed in PCa tissues (Fig. 4 D, E). To further investigate the interaction of circFAM126A with IGF2BP1, RNA pull-down was conducted. It was found that circFAM126A probe was enriched for circFAM126A and IGF2BP1 (Fig. 4 F), and significant enrichment of circFAM126A was observed in IGF2BP1 immunoprecipitants by anti-IGF2BP1 (Fig. 4 G). Furthermore, knockdown of IGF2BP1 resulted in decreased expression of circFAM126A (Fig. 4 H). Correlation analysis also showed that circFAM126A was positively correlated with IGF2BP1 (Fig. 4 I). Taken together, the m6A reading protein IGF2BP1 can bind to circFAM126A in vitro , and that m6A modification enhances the transcriptome stability of circFAM126A, which may be part of the reason for the significant up-regulation of circFAM126A in PCa. circFAM126A as a sponge for miR-505-3p Considering the cytoplasmic distribution of circFAM126A, it was speculated that circFAM126A may function by targeting miRNAs. Potential targets of circFAM126A were predicted using miRNA target prediction tools such as miRDB, miRanda and circBase. Among 246 candidate miRNAs overlapping in the 3 databases, the top 10 miRNAs were selected for further analysis (Fig. 5 A). To validate our predictions, biotinylated circFAM126A probes were designed and the pull-down efficiency was confirmed in PCa cells overexpressing circFAM126A (Fig. 5 B). circFAM126A probe pulled down miR-505-3p in PC-3 and DU145 cells (Fig. 5 C). The binding site between circFAM126A and miR-505-3p waspredicted by Starbase (Fig. 5 D). To further verify the direct binding of miR-505-3p to circFAM126A, RIP experiments were conducted. It was found that circFAM126A was preferentially enriched in ago2-RIP (Fig. 5 E). In addition, Ago2-RIP experiment also reported that circFAM126A was enriched in the miR-505-3p mimic group (Fig. 5 F). 
Subsequently, dual-luciferase reporter experiments also manifested that overexpression of miR-505-3p reduced the luciferase activity of the wild-type circFAM126A reporter gene, but not that of the mutant circFAM126A reporter (Fig. 5 G). In addition, biotin-labeled miRNA pull-down experiments proved that circFAM126A was elevated in PCa cells transfected with biotin-labeled miR-505-3p (Fig. 5 H). At the same time, circFAM126A silencing increased miR-505-3p levels (Fig. 5 I). In addition, downregulated miR-505-3p was also detected in tumor tissues of 46 PCa patients (Fig. 5 J) and was negatively correlated with circFAM126A (Fig. 5 K). These results suggest that circFAM126A directly sponges miR-505-3p. miR-505-3p inhibits the malignant progression of PCa cells Next, the role of miR-505-3p in PCa was further studied. PC-3 and DU145 cells were transfected with miR-505-3p mimic and inhibitor. miR-505-3p mimic elevated miR-505-3p expression, and miR-505-3p inhibitor reduced miR-505-3p expression (Fig. 6 A). Functional experiments found that increasing miR-505-3p obstructed PC-3 and DU145 cell proliferation (Fig. 6 B-D) and promoted apoptosis (Fig. 6 E). In addition, up-regulation of miR-505-3p significantly reduced the invasion and migration capacity of cells (Fig. 6 F, G). TG and cholesterol levels were also reduced (Fig. 6 H, I). RT-qPCR analysis found that LUR1 and SREBPs mRNA expression were suppressed in PC-3 and DU145 cells overexpressing miR-505-3p (Fig. 6 J, K). Low expression of miR-505-3p had the opposite results (Fig. 6 B-K). These results indicated that overexpression of miR-505-3p inhibited the malignant progression of PCa cells. circFAM126A sponges miR-505-3p to upregulate CANX Target genes of miR-505-3p were predicted using PITA, MicroCosm, TargetScan, PicTar and miRanda, the intersection of these databases indicated that CANX was the most likely miR-505-3p target gene (Fig. 7 A). RT-qPCR and Western Blot showed that regulating miR-505-3p could affect CANX expression (Fig. 7 B, C). The assay of luciferase reporter gene showed that in PCa cells transfected with wild-type CANX 3'-UTR reporter, overexpression of miR-505-3p could inhibit the luciferase activity of wild-type CANX 3'-UTR reporter but had no influence on that of corresponding mutant reporter (Fig. 7 D). Binding sites between CANX and miR-505-3p were predicted (Fig. 7 E). In addition, biotin-labeled miRNA pull-down experiments confirmed that CANX was a target gene of miR-505-3p (Fig. 7 F). RT-qPCR and Western Blot showed that CANX was highly expressed in PCa tissues (Fig. 7 G, H). CANX expression was in a positive correlation with circFAM126A expression and in a negative correlation with miR-505-3p expression (Fig. 7 I, J). Moreover, silencing circFAM126A reduced CANX levels (Fig. 7 K, L). These findings suggest the presence of the circFAM126A-miR-505-3p-CANX regulatory axis. CANX overexpression re-activates cancer malignancy after inhibition of circFAM126A or overexpression of miR-505-3p The relationship between circFAM126A/miR-505-3p/CANX was then evaluated by functional rescue experiments. Transfection of pcDNA 3.1-CANX in cells that knocked down circFMA126A or overexpressed miR-505-3p promoted CANX expression (Fig. 8 A). CCK-8, colony formation assay, and EdU assay showed that the inhibitory effect of circFMA126A knockdown or miR-505-3p overexpression on cell proliferation was suppressed by overexpression of CANX (Fig. 8 B-D). 
Flow cytometry showed that the promoting effect of circFMA126A knockdown or miR-505-3p overexpression on cancer apoptosis rate was reversed by overexpression of CANX (Fig. 8 E). Transwell experiments showed that transfection of pcDNA 3.1-CANX in cells that knocked down circFMA126A or overexpressed miR-505-3p restored the invasion and migration ability of cells (Fig. 8 F, G). Furthermore, the inhibitory effects of circFMA126A knockdown or overexpression of miR-505-3p on TG and cholesterol were reversed by overexpression of CANX (Fig. 8 H, I). RT-qPCR experiments showed that the inhibitory effect of circFMA126A knockout or overexpression of miR-505-3p on mRNA expression of LUR1 and SREBPs was blocked by overexpression of CANX (Fig. 8 J, K). These results indicate that circFAM126A and miR-505-3p affect the malignant behavior of PCa by targeting CANX. Silencing circFAM126A inhibits PCa xenograft tumor formation By subcutaneous injection of PCa cells transfected with pre-constructed LV-sh-circFAM126A and LV-sh-NC into nude mice (Fig. 9 A), the tumor-suppressing effect of circFAM126A downregulation was further confirmed. circFAM126A knockdown significantly inhibited tumor growth, as evidenced by the relatively small tumor size and weight (Fig. 9 B, C). The proliferation and apoptosis of xenografts were analyzed by immunohistochemical staining of proliferation marker Ki-67 and apoptosis marker caspase-3. Immunohistochemical staining indicated that the number of Ki-67-positive cells decreased and that of caspase-3-positive cells increased in circFAM126A-low-expressing tumor xenografts (Fig. 9 D). In addition, knockdown of circFAM126A significantly inhibited VEGF and PD-L1 protein expression in tumors, suggesting that circFMA126A has a positive effect on tumor angiogenesis and immune escape (Fig. 9 E). Oil red O staining showed that lipid content was reduced after inhibition of CircFAM126A (Fig. 9 F). PCa cells transfected with LV-Sh-circFAM126A or control were injected into the tail vein of nude mice to measure the effect of circFAM126A on tumor metastasis in vivo . Bioluminescence imaging showed that inhibition of circFAM126A prevented PCa cells from metastasis to the lung (Fig. 9 G). After 7 weeks, anatomical observations revealed a marked reduction in the number of metastatic lymph nodes in nude mice injected with cells transfected with LV-sh-circFAM126A (Fig. 9 H). These results highlight the role of CircFAM126A in PCa and suggest that inhibition of CircFAM126A inhibits tumor formation in vivo .
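For readers who wish to reproduce the patient-level analyses reported above (median-expression grouping, Kaplan-Meier survival comparison, and ROC discrimination of tumor versus normal tissue), a minimal R sketch is given below. The data frames, their column names, and the choice of a log-rank test are assumptions made for illustration rather than details taken from the study.

```r
## Hypothetical sketch of the analyses described above. The data frames `pca` (one row
## per patient) and `tissue` (one row per tumor or paired normal sample), their column
## names, and the use of a log-rank test are assumptions, not details from the paper.
library(survival)
library(pROC)

# Split patients at the median circFAM126A expression (the "cutoff value" grouping)
pca$group <- ifelse(pca$circ_expr > median(pca$circ_expr), "high", "low")

# Kaplan-Meier curves of overall survival by expression group
km <- survfit(Surv(os_months, death_event) ~ group, data = pca)
plot(km, col = c("red", "blue"), xlab = "Months", ylab = "Overall survival")
survdiff(Surv(os_months, death_event) ~ group, data = pca)   # log-rank comparison (assumed)

# ROC analysis: how well circFAM126A expression separates tumor from normal tissue
roc(tissue$is_tumor, tissue$circ_expr, ci = TRUE)   # reports AUC with 95% CI
```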
Discussion Various circRNAs are differentially expressed in various diseases, especially cancer. Its stability, abundance, conservation and spatiotemporal specificity make it a hot spot of biomedical research in recent years. This report identified differentially expressed circRNAs in PC and focused on circFAM126A in PCa. Compared with linear FAM126A, circFAM126A was stably expressed in PCa cells and was mainly localized to the cytoplasm, suggesting its role in post-transcriptional gene regulation. The expression and functional differences of circFAM126A may be related to the tissue specificity, suggesting that circFAM126A has the potential to serve as a promising prognostic biomarker to guide the development of personalized therapy for PCa patients. CircRNAs are highly dysregulated in many types of cancer and exhibit highly tissue- and disease-specific 37 . This study examined that circFAM126A was elevated in PCa tissues and PCa cell lines. Loss-of-function experiments suggest that circFAM126A was associated with tumor progression and circFAM126A knockdown reduced PCa cell malignancy. Abnormal cholesterol metabolism in tumor cell physiology has been appreciated 38 . LUR1 is a novel lipid production regulator that upregulates SREBPs and accelerates cholesterol synthesis 39 . This report found that TG and cholesterol levels were reduced, as well as mRNA expression of LUR1 and SREBPs in PC-3 and DU145 cells after circFAM126A inhibition. These tumor growth observations were validated in a mouse xenograft model and in vivo metastasis experiments. The biogenesis mechanism of circRNAs is a very complex process 40 . m6A has been confirmed to be an abundant transcription-related modification and is involved in the regulation of circRNAs, such as m6a-mediated upregulation of circMDK to promote tumorigenesis 41 . m6A modifies circHPS5 to promote liver cancer progression 42 . This study predicted m6A sites in circFAM126A. m6A RNA methylation is the most common internal mRNA modification in mammals, mediated by m6A methyltransferases, demethylases, or m6A-binding proteins 43 . The study evidenced that IGF2BP1 protein is the m6a-binding protein of circFAM126A. In addition, m6A reader protein IGF2BP1 can bind circFAM126A in vitro, and m6A modification enhances the transcriptome stability of circFAM126A, which may be part of the reason why circFAM126A is upregulated in PCa. CircRNAs exert their functions through a variety of biological processes, such as miRNA sponges 9 . The ceRNA mechanism suggests that circRNAs competitively bind miRNAs to relieve the inhibition of miRNA-targeted genes 44 . This study predicted the potential target of circFAM126A, miR-505-3p and further confirmed that miR-505-3p inhibited the growth, TG, and cholesterol in PCa. CANX is a chaperone protein involved in the folding and assembly of major histocompatibility class-I (MHC-I) molecules on the endoplasmic reticulum 45 , 46 . Aberrant expression of CANX prevents successful assembly of MHC-I, processing of antigenic peptides and presentation on the surface of tumor cells, thus potentially leading to evasion of immune surveillance 47 . This study found that CANX was upregulated in PCa and acted as a target gene of miR-505-3p, and overexpression of CANX decreased the suppressive effects of circFAM126A and overexpression of miR-505-3p on cancer malignancy. In conclusion, circFAM126A has an important role in PCa tumorigenesis and metastasis. 
Mechanistically, circFAM126A acts as a sponge for miR-505-3p and thereby regulates CANX expression. Our study suggests that circFAM126A may be a potential biomarker and therapeutic target for PCa, enriching research on the pathogenesis of PCa and providing a theoretical basis for in-depth exploration of the functions of circRNAs in PCa.
# These authors contributed equally to this work. Competing Interests: The authors have declared that no competing interest exists. Prostate cancer (PCa) is the most commonly diagnosed malignancy in men. In tumor biology, n6-methyladenosine (m6A) can mediate the production of circular RNAs (circRNAs). This study focused on the mechanism of m6A-modified circRNA family with sequence similarity 126, member A (FAM126A) in PCa. Cell counting kit-8 assay, colony formation assay, 5-ethynyl-2'-deoxyuridine assay, transwell assay, and xenograft mouse models were applied to study the role of circFAM126A in PCa cell growth and tumor metastasis, and cellular triglyceride and cholesterol levels were measured to assess cholesterol synthesis. RNA immunoprecipitation, RNA pull-down, luciferase reporter gene assay, and western blot were adopted to explore the underlying molecular mechanism. Data showed that circFAM126A was upregulated in PCa and promoted PCa progression in vitro . m6A modification of circFAM126A enhanced transcriptional stability. CircFAM126A targeted microRNA (miR)-505-3p to mediate calnexin (CANX). Up-regulating miR-505-3p or inhibiting CANX suppressed cholesterol synthesis and malignant progression in PCa cells. Overexpressing CANX suppressed the inhibitory effect of circFAM126A silencing or miR-505-3p upregulation on PCa cells. Our current findings provide a new therapeutic strategy for the treatment of PCa.
Funding Natural Science Foundation of Hunan Province (2021JJ70047). Ethics approval and consent to participate All procedures performed in this study involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. All subjects were approved by The First Affiliated Hospital of Shaoyang University. Consent for publication Written informed consent for publication was obtained from all participants. Data availability Data is available from the corresponding author on request. Author contributions Lin Luo and Ping Li designed the research study. QingZhi Xie and KangNing Wang performed the research. FuQiang Qin, DunMing Liao and Ke Zeng provided help and advice on the experiments. KangNing Wang, QingZhi Xie and YunChou Wu analyzed the data. Lin Luo and Ping Li wrote the manuscript. All authors contributed to editorial changes in the manuscript. All authors read and approved the final manuscript.
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):966-980
oa_package/b5/5e/PMC10788727.tar.gz
PMC10788728
0
Introduction Obesity is a chronic, relapsing, progressive disease that has been characterized as a pandemic over the last few decades by the WHO 1 , 2 . This surge in the prevalence of obesity, in turn, has been followed by an exponential increase in metabolic bariatric surgery (MBS), which has been recognized as the most effective intervention for treating obesity and its sequelae with a favorable safety profile 3 - 5 . Among the long-term benefits of MBS, cancer prevention has presumably the most striking impact on shaping the opinion of healthcare providers and patients alike. Cancer is a multifactorial and multistage disease process, as much as obesity is. The first systematic and comprehensive attempt to correlate obesity (or “body fatness”) with cancer was published in 2016 by the International Agency for Research on Cancer (IARC) and has been updated recently 6 , 7 . In this context, there has been sufficient evidence that obesity significantly increases the incidence of at least 13 types of cancer, including esophageal adenocarcinoma; cancer of the gastric cardia; colorectal, liver, gallbladder, and pancreatic cancers; breast cancer in postmenopausal women; endometrial and ovarian cancer; renal cell carcinoma, meningioma, and multiple myeloma, with the respective relative risks (RR) ranging from 1.1 to 7.1 (95% CI 1.0-8.1), depending on cancer type ( Table 1 ) 7 . This epidemiological correlation has been supported by Mendelian randomization studies, which have showcased the causal effects of visceral adiposity on the manifestation of neoplasia, 8 , 9 further confirming the dictum that obesity is not a mere risk factor for developing cancer but a pivotal player of its etiopathogenetic continuum 10 . On mechanistic grounds, several processes have been proposed and investigated: the interplay between adipocytes and inflammatory cells via cytokines and reactive oxygen species; the endocrine properties of adipocytes, through both peripheral aromatization of androgens and direct increase of gastrointestinal hormones such as leptin, resistin, and visfatin; the role of the microbiome and its alterations in the context of obesity; the anabolic effects of insulin resistance and diabetes on the survival of cancer cells; genetic predisposition and epigenetic changes to the genome that are accelerated in the context of obesity; etc. 11 , 12 . In all these processes, the role of adipose tissue and its alterations is central: from inflammation to fibrosis and extracellular matrix remodeling; to an altered microenvironment that affects lipid metabolism and induces insulin resistance; to microbiotal dysbiosis and disrupted immune function; and to imbalanced sex hormone and adipokine secretion 13 , 14 . The role of MBS in preventing cancer has recently started to be documented in large-scale population studies with long-term follow-up periods. Among the relevant seminal studies, there are a few worth mentioning. For instance, Schauer et al. retrospectively analyzed the data of 22,198 post-bariatric patients from multiple centers and matched them to 66,427 non-operated individuals over a period of 7 years 15 . In the post-bariatric cohort, they found a 33% lower hazard of developing any cancer [hazard ratio (HR) 0.67, 95% CI 0.60-0.74] and an even lower risk of developing obesity-related cancer (HR 0.59, 95% CI 0.51-0.69). Similarly, Aminian et al. retrospectively analyzed a cohort of 30,318 individuals (among whom 5,053 had undergone MBS). 
They found that those who had undergone MBS featured a significantly lower incidence of obesity-associated cancer (HR 0.68, 95% CI 0.53-0.87), while cancer-related mortality was almost half after MBS (HR 0.52, 95% CI 0.31-0.88) as compared to nonsurgical care 16 . In a recent report, Khalid et al. demonstrated that patients who were eligible for (but eventually did not undergo) bariatric surgery had significantly ( p <0.0005) higher risk for developing any cancer type (4.61%) as compared to patients who were submitted to laparoscopic sleeve gastrectomy (LSG, 3.47%) or Roux-en-Y gastric bypass (RYGB, 3.62%) 17 . Several pertinent meta-analyses have also been published ( Table 2 ) 18 - 20 . In the most recent one (2023), Wilson et al. analyzed 32 primary studies and found a significant reduction in the overall incidence of cancer (RR 0.62, 95% CI 0.46-0.84), obesity-associated cancer incidence (RR 0.59, 95% CI 0.39-0.90), and cancer-related mortality (RR 0.51, 95% CI 0.42-0.62) 20 . Even more compelling, though, is the evidence that stems from the seminal prospective SOS (Swedish Obese Subjects) study. The initial seminal report in 2009 suggested that MBS has a protective role regarding carcinogenesis only for women, not for men 21 . The most recent relevant report from the SOS study regarding 701 patients with obesity and diabetes (MBS arm: 393 patients, conventional treatment: 308 patients) showed that, during a median follow-up of 21.3 years (IQR 17.6-24.8, maximum 30.7), the incidence rate of first-time cancer was 9.1/1000 person-years in the bariatric group versus 14.1/1000 person-years in the conventional group (adjusted HR 0.63, 95% CI 0.44-0.89) 22 . Interestingly, diabetes remission at 10 years was associated with reduced cancer incidence (adjusted HR 0.40, 95% CI 0.22-0.74), implicating a pivotal role of hyperinsulinemia and insulin resistance in the pathophysiology of obesity-related carcinogenesis 22 . Several studies have focused on the impact of MBS on cancer with regard to female sex. For example, Tsui et al. investigated the risk of developing female-specific cancers following MBS 23 . After matching 55,781 post-bariatric females with 247,102 women living with obesity, they found an overall incidence of female-specific cancers of 2.09% in the former (bariatric) group versus 2.69% in the latter (non-bariatric; p <0.0001), with a hazard ratio for female-specific cancers of 0.78 (95% CI 0.73-0.83) in the post-bariatric group. Additionally, Adams et al. published their retrospective population study on long-term cancer outcomes following MBS 24 . Their study spanned 37 years (1982-2019), investigated 21,837 post-MBS patients matched 1:1 by age, sex, and body mass index (BMI) with non-surgical individuals, and found a 25% overall decrease in the risk of developing cancer (HR 0.75, 95% CI 0.69-0.81). Interestingly, their outcomes were of relevance to the female population, as women demonstrated a reduced overall cancer incidence (HR 0.67, 95% CI 0.62-0.74), obesity-related cancer incidence (HR 0.59, 95% CI 0.52-0.66), and cancer mortality (HR 0.53, 95% CI 0.44-0.64), compared to men. A few years earlier, a meta-analysis of 7 relevant studies demonstrated a total RR of 0.41 (95% CI 0.31-0.56) for developing breast, ovarian, and endometrial cancer after MBS 25 . 
Besides, evidence from the SOS study also supports the notion that women are particularly favored by MBS, since they appear to have a reduced incidence of cancer in general (HR 0.58, 95% CI 0.44-0.77) and gynecologic cancer in particular (HR 0.68, 95% CI 0.52-0.86) 22 , 26 . In the present narrative review, we will focus on the potential benefit of MBS in preventing various types of gynecologic neoplasms, including breast, endometrial, and ovarian cancer, as it has been documented in recent publications. Simultaneously, we will present the pathophysiological grounds of the correlation of these cancers with obesity and potential implications in future therapeutic strategies.
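As a purely illustrative aside, the incidence rates quoted from the SOS report (9.1 versus 14.1 first-time cancers per 1000 person-years) can be reproduced arithmetically as events divided by accumulated person-years. The R sketch below uses hypothetical event counts and follow-up totals chosen only to match those rates, and the resulting crude rate ratio ignores the adjustments applied in the original analysis.

```r
## Worked arithmetic for illustration only: the 9.1 vs. 14.1 first-time cancers per 1000
## person-years quoted above come from the cited SOS report, but the event counts and
## person-year totals below are hypothetical stand-ins, not data from that study.
incidence_rate <- function(events, person_years) 1000 * events / person_years

rate_surgery      <- incidence_rate(events = 68, person_years = 7500)   # ~9.1 per 1000 py
rate_conventional <- incidence_rate(events = 85, person_years = 6000)   # ~14.2 per 1000 py

rate_surgery / rate_conventional   # crude (unadjusted) incidence rate ratio, ~0.64
```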
Competing Interests: The authors have declared that no competing interest exists. Obesity and cancer represent two pandemics of current civilization, the progression of which has followed parallel trajectories. To date, thirteen types of malignancies have been recognized as obesity-related cancers, including breast (in postmenopausal women), endometrial, and ovarian cancer. Pathophysiologic mechanisms that connect the two entities include insulin resistance, adipokine imbalance, increased peripheral aromatization and estrogen levels, tissue hypoxia, and disrupted immunity in the cellular milieu. Beyond the connection of obesity to carcinogenesis at the molecular and cellular level, clinicians should always be cognizant of the fact that obesity might have secondary impacts on the diagnosis and treatment of gynecologic cancer, including limited access to effective screening programs, resistance to chemotherapy and targeted therapies, and persisting lymphedema, among others. Metabolic bariatric surgery represents an attractive intervention not only for decreasing the risk of carcinogenesis in high-risk women living with obesity but, most importantly, also as a measure to improve disease-specific and overall survival in patients with diagnosed obesity-related gynecologic malignancies. The present narrative review summarizes current evidence on the underlying pathophysiologic mechanisms, the clinical data, and the potential applications of metabolic bariatric surgery in all types of gynecologic cancer, including breast, endometrial, ovarian, cervical, vulvar, and vaginal.
Breast Cancer Obesity is among the most robustly documented risk factors for the development of breast cancer 27 . Interestingly, delving into biological processes will make us realize that obesity serves far beyond a mere risk factor for breast cancer 10 . On a phenotypical level, not all breast cancers are the same, nor are they inherently related to obesity. In premenopausal women, there seems to be an inverse association of estrogen receptors (ER + cancers) with obesity (as per BMI and per adiposity), a positive correlation between obesity and triple-negative cancers (43-80% higher risk), and a nonsignificant association between HER2 + status and obesity 28 . On the contrary, there seems to be an increased risk of ER + tumors, increased incidence of triple-negative breast cancers (TNBC), and worse overall survival in HER2 + carriers regarding postmenopausal patients living with obesity who develop breast cancer 28 . According to a recent meta-analysis, obesity had a negative impact on disease-free survival (DFS) and overall survival (OS) for all breast cancer subtypes: the hazard ratios (HR) regarding DFS were 1.26 (95% CI 1.13-1.41) for hormone receptor positive/HER2 negative tumors (HR + HER2 - ), 1.16 (95% CI 1.06-1.26) for HER2 + cancers, and 1.17 (95% CI 1.06-1.29) for TNBC, whereas the respective values regarding OS were 1.39 (95% CI 1.20-1.62) for HR + HER2 - tumors, 1.18 (1.05-1.33) for HER2 + tumors, and 1.32 (95% CI 1.13-1.53) for TNBC 29 . Interestingly, these correlations were only applicable to obesity, as no significant association was shown between simply overweight and DFS or OS of breast cancer, possibly suggesting a linear association between the severity of obesity and susceptibility to developing breast cancer. On a molecular level, the interplay between obesity and breast carcinogenesis is heralded expansion, inflammation, and dysfunction of the adipose tissue 30 . These alterations foster at least four major molecular conditions 28 , plus newly-discovered ones: 1) Hyperinsulinemia and increased levels of insulin growth factor-1 and -2 (IGF-1 and IGF-1), as a result of high circulating levels of free fatty acids (from increased lipolysis) and glucose (from gluconeogenesis) and the subsequent development of peripheral insulin resistance 31 . The same conditions also lead to decreased sex hormone binding globulin (SHBG), which increases the levels of free estrogens 31 . In turn, insulin and IGF increase have pleotropic and interactive effects on a subcellular level: i) conjugation of IGF-1 with its receptor (IGF-R) leads to activation of multiple kinase downstream pathways, the end result of which is endocrine-resistant cell growth; ii) there is crosstalk between IGF-R and insulin receptor that has an additive effect on hormonal independence; iii) activation of IGF-R by IGF-2 induces the activation of the epidermal growth factor receptor (EGFR), which attributes proliferation independence to affected cells; and iv) intracellular androgen receptors (AR) and estrogen receptors (ER) induce hormonal independence via IGF-independent activation of the IGF-R 32 . 2) Imbalance of adipokines ( aka adipose-derived cytokines), i.e., increase in leptin, interleukin 6 (IL-6), and tumor necrosis factor alpha (TNF-α), and decrease in adiponectin, which consequently induce the expression of their respective receptors. 
In turn, binding of leptin to its receptor (ObR) leads not only to activation of multiple signal transduction pathways, but also to augmented cross-talk with other receptors, including the epidermal growth factor receptor (EGFR), Notch, ER, and interleukin (IL) receptors 32 . The end results are multiple: cell proliferation (via cyclin D1), inhibition of apoptosis (via the Bcl-2 family and survivin), increase of oncogenic signals (hypoxia-inducible factor-1 or HIF-1α, heat shock protein 90 or Hsp90), modifications of the extracellular matrix and facilitation of metastasis (via metalloproteases or MMPs and serpin), and angiogenesis (via vascular endothelial growth factor or VEGF) 32 . On the other hand, low levels of adiponectin lead to differential activation of its receptor via alternative signal transduction (induction of the Ras-MAP kinase and mTOR pathways instead of activation of the PGC-1α pathway and inhibition of mTOR) 33 . 3) Increased activity of aromatase, which is induced by several pathways (IGF-1, leptin, prostaglandin E2 (PGE2), TNF-α, and IL-1β) and leads to increased estrogen synthesis and promotion of estrogen receptor (ER) expression 32 . Furthermore, aromatase activity is amplified by estradiol itself in a positive feedback manner and is further enhanced by inhibition of aromatase dephosphorylation, which leads to enduring aromatase action 32 . The net effect is an increase in aromatase expression and activity, an increase in estradiol production and bioavailability, and enhanced ER activation. Of note, there are thus two pathways to increased estrogen production, one via increased lipolysis (see point 1) and one via increased aromatase activity. Increased estrogens, in turn, have three major consequences: i) increased estrogen metabolism, which leads to the production of toxic products such as reactive oxygen species and quinones, ii) increased cellular proliferation that leads to replication stress, and iii) decreased DNA damage repair 34 . The cumulative result of these processes is additive DNA damage, which promotes tumorigenesis. 4) Increased synthesis of cholesterol, which leads to defective sterol regulatory element-binding protein-1 and -2 (SREBP-1 and SREBP-2) expression and further upregulation of hydroxymethylglutaryl-CoA reductase (HMGR) 28 . Furthermore, 27-hydroxycholesterol (27-OHC), a metabolite of cholesterol hydroxylation, has been found to serve as an endogenous selective estrogen receptor modulator (SERM) 35 . Additionally, 27-OHC competitively binds to the liver X receptor (LXR) and abolishes the physiological effect of LXR, which is inhibition of cell proliferation 35 . 5) Enhanced function of fatty acid binding protein 4 (FABP4), a protein that facilitates the absorption and utilization of water-insoluble dietary long-chain fatty acids, has been proposed as one of the novel mechanisms that propagate breast cancer development in the context of obesity 36 . FABP4 secreted by tumor-associated macrophages and circulating FABP4 secreted by dysfunctional adipocytes lead, via various signal transduction pathways, to an enhanced stem cell-like phenotype and tumor progression 36 . 6) MicroRNAs (miRNAs) constitute another relatively novel mechanism that plays a key role in breast tumorigenesis in the context of obesity. MiRNAs are a relatively recently discovered class of regulatory RNA genes with multiple implications for structural, catalytic, and regulatory cellular functions. 
More than 2500 members of this class of molecules have been discovered and their upregulation or downregulation has been correlated with various disease processes. Relative to our subject, there are breast cancer-associated miRNAs (upregulation of miR-20, downregulation of miR-46), obesity-related miRNAs (upregulation of miR-23, downregulation of miR-14), miRNAs common to breast cancer and obesity (let-7, miR-21, -30c, -31, -93, -124, -143, -155, -181a, -221/222, -326, 335), and miRNAs common in breast cancer and obesity-associated breast cancer (upregulation of miR-302b, downregulation of miR-498) 37 . MiRNAs also have different functions: some are related to tumor suppression (i.e., let-7, -200, -205, -145) and are downregulated in the context of breast cancer, some serve as oncogenic signal (miR-10, -17, -21, -155) and are upregulated in breast tumorigenesis, and some propagate metastasis (miR-9, -36, -10b, -37, -38, -21, -39-45, -29a, -46, -373/520) 38 , 39 . On a nuclear level, the aforementioned molecular mechanisms converge on three discrete families of transcription factors (TFs): hypoxia-induced factor (HIF), p53, and estrogen receptor 40 . Besides, different mechanisms prevail in different cell types. For example, fat-rich diet and hypoxia act directly on immune cells facilitating their activation and the release of inflammatory cytokines (PGE2, IL-6, TNF-α), while at the same time enhance glucose uptake, aerobic glycolysis, and cell proliferation; leptin, IL-6, TNF-α, and PGE2 act on adipose stromal cells and promote glucose uptake, aerobic glycolysis, estrogen production, and cell proliferation via the HIF and p53 families of TFs; finally, insulin, leptin, PGE2, and estradiol act on tumor cells and induce glucose uptake, aerobic glycolysis, protein synthesis, nucleotide synthesis, and cell proliferation primarily via the estrogen receptor family of TFs 40 . At the same time, it has recently been shown that increased BMI modifies the levels of tumor-infiltrating lymphocytes (sTILs), thus decreasing pathological complete response (pCR) rates and survival in TNBC patients 41 . The key players for orchestrating all these processes and secreting pivotal molecules are adipose tissue-derived mesenchymal stromal/stem cells (ASCs/MSCs) 42 , 43 . The alterations that these mechanisms bring upon, under the influence of single-nucleotide polymorphisms (SNPs) and epigenetic modifications (obesogenic dietary patterns, unhealthy foods, sedentary lifestyle and lack of exercise), affect all stages of tumorigenesis, including initiation, progression, migration, invasion, and metastasis 28 , 30 , 38 , 39 , 44 . The clarification of these mechanisms (summarized in Figure 1 ) does not have only theoretical implications on the correlation between obesity and breast tumorigenesis. It can also serve as a scaffold to interpret why various interventions to intercept obesity yield clinical benefit on breast cancer prevention, improved response to oncological therapy, and improved prognosis and increased survival after the diagnosis of breast cancer 45 . Beyond its metabolic sequelae (which include type 2 diabetes mellitus and cancer), obesity has further implications regarding mechanical, monetary, and mental issues, as vividly illustrated by A. Sharma (the “4 Ms” of obesity) 46 . In this context, there is compelling evidence that obesity constitutes a considerable barrier for patients to participate in appropriate breast cancer screening programs, irrespective of geographical boundaries 47 - 50 . 
Similar barriers have also been found regarding cervical cancer screening 47 - 55 . Moreover, cancer patients who suffer from obesity (and their healthcare providers) face some additional, more practical challenges. In this context, there are reports that suggest higher recurrence rates in obese patients who undergo breast conserving surgery (BCS) as compared to normal-weight patients. The evidence is even clearer on a less favorable cosmetic outcome in the obesity group after BCS. Obesity has also been linked to more postoperative complications after mastectomy and an increased failure rate of sentinel lymph node mapping. Equally challenging is breast reconstruction in obese individuals following mastectomy for breast cancer, owing to high complication rates and suboptimal aesthetic outcome 56 . Obesity also has implications in the adjuvant therapeutic setting: patients with large breasts may receive increased doses of radiation, chemotherapy may have increased toxicity and failure rates in the context of obesity irrespective of tumor size, nodal status, and hormone receptor status, whereas aromatase inhibitors may be less effective in overweight and obese populations 56 . Finally, patients living with obesity and breast cancer are at greater risk of developing lymphedema, both before and after mastectomy 56 . A pertinent meta-analysis showed that weight-loss interventions lead to a decreased volume of both the affected and unaffected arms but failed to show a significant decrease in the severity of breast cancer-related lymphedema 57 . Dietary modifications, including chronic caloric restriction, time-restricted feeding, fasting, fasting-mimicking diets, intermittent energy restriction, the ketogenic diet, and the Mediterranean diet, have demonstrated attributes of cancer prevention, restoration of the adipokine balance, improved insulin sensitivity, reduced synthesis of cholesterol and its byproducts, reduced systemic inflammation, and reduced toxicity of chemotherapy 58 . Physical activity also seems to have a beneficial impact on modifying the risk of developing breast cancer in women living with obesity 59 . Even more importantly, obesity and interventions to curb obesity seem to affect prognosis and survival in patients who have already been diagnosed with breast cancer, in opposite directions 60 . In a recent meta-analysis, Pane Y et al. demonstrated that increased adiposity is linked to significantly elevated all-cause mortality (RR 1.21, 95% CI 1.15-1.27), breast cancer-specific mortality (RR 1.22, 95% CI 1.13-1.32), locoregional recurrence (RR 1.12, 95% CI 1.06-1.18), and distant recurrence (RR 1.19, 95% CI 1.11-1.28) in breast cancer survivors 61 . Similarly, worse outcomes regarding DFS and OS have been shown for patients with early breast cancer 62 . Interestingly, another meta-analysis demonstrated worse survival rates with increased adiposity based on anthropometric criteria (OS 1.30, 95% CI 1.15-1.46; cancer-specific survival 1.26, 95% CI 1.03-1.55), but failed to do so for imaging-measured adiposity 63 . Moreover, an earlier Cochrane meta-analysis and a recent meta-analysis of randomized controlled trials have shown that weight loss programs (and particularly multimodal interventions including diet, exercise, and psychosocial support) in breast cancer survivors result in significant weight loss and reduction of adipose tissue without increasing adverse effects 64 , 65 . However, they have also failed to show a clear benefit for survival. 
This makes the appeal for more radical solutions, such as MBS, very relevant. Metabolic bariatric surgery (MBS) is the most effective treatment for obesity and related metabolic disorders nowadays 66 , and this also holds true regarding the role of MBS in the prevention of breast cancer in this population, according to relevant literature 67 . Evidence before 2021 is summarized in a meta-analysis of 11 studies, comprising >1,100,000 patients in total. Breast cancer was diagnosed in 0.54% of post-MBS patients versus 0.84% in controls (RR 0.50, 95% 0.37-0.67). Most importantly, the beneficial effects of MBS were evident for advanced stage disease (stage III or IV, RR 0.50, 95% CI 0.28-0.88), in particular 68 . In 2022, Doumouras et al. documented the incidence of breast cancer in 25,448 women (12,724 post-MBS versus an equal number of matched controls): 0.79% in the surgical wing versus 1.09% in the non-surgical one [adjusted HR 0.81 (95% CI 0.69-0.95) at 1 year, 0.76 (95% CI 0.59-0.99) at 7 years] 69 . One year later, the same authors retrospectively assessed the risk of breast cancer in a cohort of 69,260 females 70 . The non-operated group had a significantly increased hazard for developing breast cancer at 1 year (HR 1.38, 95% CI 1.21-1.58), 2 years (HR 1.31, 95% CI 1.12-1.53), and 5 years (HR 1.38, 95% CI 1.21-1.58). The interesting fact about this study is that the authors estimated the residual risk after MBS, i.e., they compared women who had lost weight with MBS with a sub-cohort of women with BMI <25 Kg/m 2 71 . In this subgroup analysis, the study failed to show any significant difference in the incidence of breast cancer between the two groups. This observation has multiple implications: further analysis of women who reached a BMI <25 Kg/m 2 post-MBS is warranted; BMI itself might be a handy index, but is inaccurate and oftentimes misleading as a measure of obesity, as it does not take adiposity into account; breast cancer, as is the case with every cancer, is a multifactorial process, including genetic predisposition, family history, personal history of high-risk lesions or irradiation etc., thus the effect of obesity is obscured by multiple confounders with potentially stronger influence 71 - 74 . Other authors have investigated the impact of MBS on breast cancer incidence as part of a cumulative investigation regarding gynecologic malignancies. In the meta-analysis of Ishihara et al. the risk of breast cancer was reduced by 49% post-MBS (RR 0.51, 95% CI 0.31-0.83; Table 2 ) 25 . Moreover, Tsui et al. demonstrated a breast cancer incidence of 1.50% in the surgical group (N = 55,781) versus 1.75% in the non-surgical group (N = 247,107, p <0.0001) 23 . Several studies have proceeded to further analysis of the impact of MBS on breast cancer according to receptor status. When comparing 2,430 post-MBS patients to 2,430 matched non-surgical females, Hassinger et al. found reduced overall breast cancer incidence (0.7% versus 1.3%, p = 0.03), lower incidence of invasive breast cancer (0.6% versus 1%), and lower incidence of ER + tumors (36.4% versus 70%, p = 0.04) in the post-bariatric group 75 . Post-MBS patients also featured lower rates of PR + and higher rates of HER2 + cancers, but these differences were non-significant. Furthermore, in a retrospective analysis of 301 pre-menopausal and 399 post-menopausal post-bariatric women compared to 53,889 non-bariatric controls, Feigelson et al. found a 37% reduction in the overall risk of breast cancer after MBS. 
Moreover, they showed that ER + tumors were less prevalent in the post-bariatric group, but this finding was significant only for postmenopausal women (premenopausal: HR 0.84, 95% CI 0.62-1.13; postmenopausal: HR 0.52, 95% CI 0.39-0.70) 76 . In addition, Heshmati et al. showed that post-MBS patients have a lower risk of developing HER2 + tumors compared with their non-operated counterparts (OR 0.16, 95% CI 0.03-0.76). Interestingly, this study did not show any difference between groups regarding hormone receptor status 77 . Likewise, the recent study of Doumouras et al. failed to show any significant difference between the bariatric group and the various BMI subgroups regarding hormone receptor and HER2 status 70 . The role of MBS as secondary prevention after the manifestation of breast cancer deserves special mention, as vigorous pertinent research is underway. In a case series of 13 patients, Zhang et al. found that MBS following the diagnosis and treatment of breast cancer, at a median interval of 3 years, is feasible and safe 78 . In a much larger, population-based study of 395,146 breast cancer survivors, Lee et al. found that MBS performed after the diagnosis of cancer was associated with a non-significant decrease in mortality (cause-specific HR 0.48, 95% CI 0.15-1.53). However, after adjustment for age, stage, comorbidity, race/ethnicity, and socioeconomic status, post-diagnosis MBS was associated with a decreased mortality risk (HR 0.37, 95% CI 0.01-0.99) in the entire cohort, which also comprised endometrial cancer survivors 79 . Despite the inherent methodological limitations of this study (i.e., survivorship bias owing to its retrospective design), it can serve as a primer for future investigation on the benefits of MBS as a measure to radically improve survival following breast and other obesity-related cancers 80 . Moreover, there is initial evidence that MBS might improve the response to therapy for breast cancer. Sipe et al. investigated the role of sleeve gastrectomy with regard to the response to immune checkpoint blockade in a rodent model and found that surgical weight loss followed by immunotherapy with anti-PD-L1 (anti-programmed death ligand-1) antibodies in formerly obese mice resulted in a reduced cancer burden and a favorable locoregional immune milieu 81 . As noted earlier, relevant research on the role of MBS in improving the disease burden in breast cancer survivors is still ongoing, and a promising future for expanding the indications of MBS lies ahead 67 . In summary, there is adequate evidence that obesity is etiologically linked to breast cancer rather than merely serving as a risk factor. Metabolic bariatric surgery consistently seems to be an effective intervention for curbing the risk of developing breast cancer in patients living with obesity, whereas emerging evidence suggests that MBS could radically improve the prognosis in breast cancer survivors. Endometrial Cancer Cancer of the uterine corpus and (postmenopausal) breast cancer occupy the first two positions among the most common obesity-related malignancies by incidence in the general population 82 . According to the IARC, endometrial cancer bears the highest relative risk among obesity-related cancers (7.1, 95% CI 6.3-8.1). For the purposes of this review, we will focus on the relationship of endometrial cancer with obesity and its management. 
Obesity exerts its effects on the endometrium via three processes primarily, i.e., increased insulin (secondary to insulin resistance), increased aromatase, and imbalanced adipokines (increased leptin, decreased adiponectin), pretty much as is the case with the breast 83 , 84 . Insulin increases the levels of bioactive IGF-1, both directly in the circulation and indirectly in the endometrium, via a decrease in the IGF-binding globulin (IGFBP) 83 . Circulating IGF-1 stimulates the production of androgens by the ovary. This creates a condition of chronic anovulation with a subsequent decrease in progesterone and adiponectin, which also negatively affect IGFBP production in the endometrium 83 , 85 . Eventually, insulin decreases SHBG, which, along with aromatase, increases the levels of bioactive estrogens. These estrogens increase the levels of IGF1 in the endometrium. Taken together, high IGF1 and low IGFBP in the endometrium lead to increased levels of bioactive IGF1 and this has a hyperplastic and subsequently dysplastic and carcinogenic effect on the endometrium, via the RAS-MAPK and PI3K-AKT-mTOR signal transduction pathways 83 , 84 , 86 . Normally, adiponectin has an inhibitory effect on the AKT/mTOR signal transduction, but in the context of obesity its low circulating levels lead to attenuation of this phenomenon 87 . Additionally, leptin binds to its ligand on the endometrial cell and exerts its biologic effects via the JAK/STAT pathway 88 . All these processes take place and prosper in a local and systematic environment of altered immunity. Studies have demonstrated a link between obesity, endometrial cancer, and increased levels of CRP, IL-1Rα, IL-6, and tissue CD8+ cells. Most importantly, the levels of these components of immunity seem to return back to normal upon effective weight loss following bariatric surgery 89 , 90 . A condition of paramount importance for the development of this microenvironment seems to be hypoxia. Tissue hypoxia upregulates HIF-mediated transcription and has pleiotropic sequelae related to adverse prognosis in the context of endometrial cancer (as well as other cancer types), including increased cell proliferation; stemness, epithelial-mesenchymal transition (EMT) and aggressive phenotype; metabolic adaptation and drug efflux resulting in resistance to chemotherapy; vasculogenesis (vasculogenic mimicry) and angiogenesis (vascular remodeling); and finally invasive and metastatic potential 91 - 93 . The role of miRNA has also started to be investigated in the context of obesity-related endometrial cancer and relevant research is still ongoing 94 . In brief, the etiopathogenetic similarities to obesity-related breast cancer are obvious. Figure 2 graphically recapitulates the available evidence on the underlying mechanisms that connect obesity and endometrial tumorigenesis. Clinical evidence supports basic science in the association of obesity with endometrial cancer. Shaw et al. found a pooled effect estimate (pEE) of 2.32 (95% CI 2.09-2.58) among case-control studies, 2.49 (95% CI 2.27-2.73) among cohort studies, and 2.65 (95% CI 2.42-2.90) in total for patients living with obesity 95 . The respective figures were 6.54 (95% CI 4.98-8.35), 3.74 (2.94-4.76), and 4.84 (95% CI 3.92-5.97) in patients suffering from severe obesity, indicating a linear relationship between the risk for developing endometrial cancer with increasing body weight. 
Similar numbers were observed when the risk of endometrial carcinogenesis was analyzed by adiposity [pEE = 2.30 (95% CI 1.71-3.09), 1.92 (95% CI 1.57-2.35), and 1.43 (95% CI 1.33-1.54)], although the magnitude of the effect was somewhat lower compared with body weight metrics. Importantly, BMI was associated with increased all-cause and endometrial cancer-specific mortality, particularly in the group of those suffering from severe obesity [pEE = 2.06 (95% CI 1.55-2.74)] 95 . Another recent meta-analysis of 11 cohort studies by the Epidemiology of Endometrial Cancer Consortium, with 14,859 cases and 40,895 controls, found a positive correlation of obesity in adulthood (OR 2.85, 95% CI 2.47-3.29) and early adulthood (OR 1.26, 95% CI 1.06-1.50) with the risk of endometrial cancer 96 . These outcomes seem to be generally universal across different ethnic groups, regarding both clinical metrics (BMI, waist circumference) and endometrial cancer-related biomarkers (IGF-1, leptin, adiponectin, IL-1, IL-6) 89 , 97 - 100 . Additionally, Wise et al., in a meta-analysis of 3 case-control studies, found that a BMI ≥30 Kg/m 2 is significantly associated with endometrial cancer in premenopausal women, warranting increased awareness in this age group regarding the beneficial role of losing weight for potentially preventing the manifestation of endometrial cancer 101 . In this regard, it is well known that premenopausal and postmenopausal endometrial cancers are two biologically distinct entities. The former manifest at a younger age; are linked to obesity, lipid, and metabolic disorders; are estrogen-dependent and related to a thickened endometrium; bear endometrioid histology; have molecular associations with PTEN, MSI, PI3K/AKT, and KRAS; and generally portend a good prognosis. Conversely, the latter manifest at an older age, are only remotely associated with obesity, are estrogen-independent and associated with an atrophic endometrium, have poor differentiation, are linked to p53, HER2, PI3K/AKT, and KRAS, and have an overall worse prognosis 86 , 102 . Nevertheless, data from the Women's Health Initiative, comprising 86,937 postmenopausal individuals, showed that an increased risk of endometrial cancer was evident in women with elevated BMI (HR 1.76, 95% CI 1.41-2.19) and waist-to-hip ratio (WHR; HR 1.33, 95% CI 1.04-1.70), thus challenging the notion that postmenopausal endometrial cancer is not related to obesity 103 . An interesting element that emerges from these studies is the limitation of BMI as a metric of obesity. Population-based studies have demonstrated a discrepancy between linear and non-linear models of predicting the incidence of endometrial cancer among women living with obesity. This non-linearity can be attributed to a growth-promoting threshold effect (i.e., "second hit" mechanisms beyond the established ones boost the incidence of carcinogenesis above a certain BMI value), loss of a regulatory inhibitory effect (such "second hit" mechanisms abolish the inhibitory mechanisms that keep the initiating mechanisms under control), multiplicative interaction (i.e., the underlying mechanisms act synergistically so that the final outcome is greater than the mere sum of the individual parts), or a "treatment effect" secondary to vigilance and aggressive prevention directed at severe obesity but not at overweight and low-grade obesity 104 . 
Despite these concerns, a recent study showed that uterine cancer was among those malignancies for which BMI was an accurate predictor based on electronic health records and prespecified cut-off points 105 . Obesity is not only a risk factor for developing endometrial cancer but might also have an impact on prognosis and survival after the diagnosis of endometrial cancer. Although there is evidence that obesity increases cardiovascular and all-cause mortality in endometrial cancer survivors 106 , 107 , there are conflicting reports regarding its impact on disease progression. For instance, some authors have claimed that obesity is linked to improved DFS in advanced-stage (3 and 4) non-endometrioid endometrial cancer 108 , whereas other publications have stated the exact opposite 109 . Two recent studies attempted to shed light on the impact of obesity on survival after the diagnosis of endometrial cancer. Lees et al. showed that, among other examined risk factors, obesity itself leads to increased all-cause mortality (HR 1.77, 95% CI 1.36-2.31) but is not related to cardiovascular or endometrial cancer-specific mortality (95% CI 0.92-2.32 and 0.83-3.93, respectively) 110 . Moreover, Kokts-Porietis observed that an increase in BMI of >5% within 1 year before the diagnosis of endometrial cancer results in a twofold decrease in OS and DFS in endometrial cancer patients 111 . In brief, the impact of obesity on cancer-specific survival is a field of ongoing investigation, as current evidence does not suffice for drawing safe conclusions. However, the negative impact of obesity on overall survival warrants increased vigilance. In this context, there is great interest in the role of increasing awareness about the risks of obesity in high-risk women and survivors of endometrial cancer, and most importantly in the value of attenuating obesity as a means of secondary prevention against cancer recurrence. A study of 93 women (mean age 44.9 years, mean BMI 48.7 Kg/m 2 ) who had enrolled in a bariatric surgery program found that, although 66% of the participants acknowledged that obesity is a risk factor for uterine carcinogenesis, less than half (48%) identified themselves as being at risk, even though they suffered from obesity themselves 112 . Conversely, another study found that endometrial cancer survivors failed to correctly classify their weight, as only 32% of the participants in the BMI range of 30-34.0 Kg/m 2 and 72.7% in the BMI range of 35-39.9 Kg/m 2 identified themselves as living with obesity 113 . Haggerty et al. yielded similar results in their survey, in which one third of participants declared being unaware of any association between obesity and endometrial cancer 114 . However, 59% were eager to follow a weight loss intervention, pointing to a potential opportunity for weight management in this population group 114 . A comparable rate of interest in MBS (61.2%) was documented more recently by Wiley et al. 115 . An effective strategy that has been suggested to increase endometrial cancer patients' awareness regarding obesity is quality improvement through structured multidisciplinary programs 116 . Moreover, Njoku et al. have acknowledged that there are several gaps in all tiers of the linkage between obesity and endometrial cancer, from estimating the actual risk to implementing risk-reducing interventions, and from understanding the underlying pathophysiologic mechanisms to implementing established prevention measures, including MBS 117 . 
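Effect sizes in this literature are typically reported as relative risks (RR) or odds ratios with 95% confidence intervals, as in the studies cited above and below. As a purely illustrative aside, the short sketch below shows the standard arithmetic behind such an estimate; the counts are entirely hypothetical and are not taken from any of the cited studies.

```python
# Illustrative only: deriving a relative risk (RR) and a Wald 95% CI from raw
# 2x2 incidence counts. All numbers are hypothetical.
import math

def relative_risk(events_exposed, n_exposed, events_control, n_control, z=1.96):
    """Return (RR, lower, upper) using the standard log-RR Wald interval."""
    risk_exposed = events_exposed / n_exposed
    risk_control = events_control / n_control
    rr = risk_exposed / risk_control
    # Standard error of log(RR) for two independent binomial samples
    se_log_rr = math.sqrt(
        1 / events_exposed - 1 / n_exposed + 1 / events_control - 1 / n_control
    )
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical cohort: 60 cancers among 12,000 post-MBS women versus
# 110 cancers among 12,000 matched non-surgical controls.
rr, lo, hi = relative_risk(60, 12_000, 110, 12_000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 0.55 (95% CI 0.40-0.75)
```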
MBS seems to be an effective measure against the manifestation and progression of endometrial cancer. On molecular grounds, it has been shown that MBS restores the levels and function of key players of endometrial tumorigenesis, including (in accordance with the mechanisms described earlier) biomarkers of cellular proliferation (Ki-67), signal transduction (pAKT), insulin resistance (HbA1c, HOMA-IR), and inflammation (CRP, IL-6) 118 . An additional benefit shown in this study was the restoration of fertility, as documented by the normalization of luteinizing hormone (LH), follicle stimulating hormone (FSH), and SHBG 118 . On clinical grounds, several relevant studies and meta-analyses have been published 119 - 121 . One of the first papers that demonstrated the benefits of MBS in reducing the risk of endometrial pathology, and which is worth mentioning because of its prospective design, is the study by Argenta et al. in 59 women who underwent MBS and in whom endometrial biopsies were obtained 122 . In this study, the prevalence of occult endometrial pathology was 6.8% at the time of MBS and 6.5% at 1-year follow-up, with resolution of hyperplasia in 2 women, persistent hyperplasia in another 2, and de novo hyperplasia in 1 122 . Later, the seminal SOS study showed that endometrial cancer was the only gynecologic cancer that had a statistically significant long-term benefit following MBS (HR 0.56, 95% CI 0.35-0.89), although all female cancers (except cervical) were linked to a reduced incidence after bariatric surgery (but without statistical significance) 26 . In the most recent meta-analysis of 7 relevant index studies by Ishihara et al., the risk of endometrial cancer was reduced by 67% after MBS (RR 0.33, 95% CI 0.21-0.51; Table 2 ) 25 . Additionally, in the study of Tsui et al. the incidence of endometrial cancer was 0.47% in the surgical group (N = 55,781) versus 0.76% in the non-surgical group (N = 247,107, p <0.0001) 23 . More recently, Khalid et al. showed that the incidence of endometrial cancer was significantly higher in non-operated females living with obesity as compared to those who had undergone LSG or RYGB (0.86% versus 0.56% versus 0.60%, p = 0.007) after 5 years of follow-up. The respective OR was 0.65 (95% CI 0.46-0.92) for LSG and 0.70 (95% CI 0.50-0.98) for RYGB 17 . Notably, a recent report disputes the benefit of MBS with regard to the incidence of endometrial cancer and disease-specific survival; however, it should be acknowledged that the retrospective nature of the study, in combination with the small sample size, does not allow generalization of its conclusions 123 . Equally inconclusive are the results of a relevant systematic review on the impact of MBS on endometrial hyperplasia, whose authors acknowledge the scarcity and poor quality of the available data 124 . Despite the conflicting data on the role of obesity in the disease progression of endometrial cancer survivors, the role of MBS as a secondary prevention measure has started to be investigated. In a recently published case series of 5 patients with a diagnosis of endometrial cancer, all patients experienced regression of their cancer, along with improvement of other obesity-related medical problems, within 6 months following MBS 125 . The much larger population-based study by Lee et al., mentioned earlier for breast cancer, also investigated the role of MBS as a secondary prevention intervention in 69,859 survivors of endometrial cancer. 
The reduction in mortality risk for endometrial cancer was also non-significant (HR 0.23, 95% CI 0.03-1.70), but it should be noted that the mortality risk of the cohort overall was decreased 79 . In brief, further investigation is warranted to validate the impact of MBS on survivorship following the diagnosis of endometrial cancer. In summary, evidence shows a mechanistic link between obesity and the manifestation of endometrial cancer. Metabolic bariatric surgery is an effective measure for preventing the development of endometrial cancer, especially in high-risk women, but its role as an intervention to extend survival after the diagnosis of endometrial cancer remains elusive. Ovarian Cancer Ovarian cancer is the third gynecologic cancer recognized to have a clear association with obesity, according to the IARC 7 . The underlying mechanisms again fit the pattern described earlier for endometrial and breast cancer, including hyperglycemia, insulin resistance and IGF-1, deranged adipokine levels (increased leptin, TNF-α, and interleukins, decreased adiponectin), inflammatory cytokines and VEGF, and altered levels of steroid hormones 126 , 127 . Nevertheless, the determinants of which phenotype (i.e., endometrial, breast, or ovarian cancer) will manifest in each patient remain to be discovered, although genetic predisposition, SNPs, epigenetic factors, and the metabolomic milieu obviously play an important role in this regard. For example, there is some evidence that dysregulated lipid synthesis and metabolism play a role in promoting ovarian tumorigenesis in the context of obesity 127 . On clinical grounds, the evidence is conflicting, as noted by a systematic review of 43 studies with almost 3.5 million participants: 14 studies found a significant correlation between obesity and ovarian cancer, 26 studies failed to show any such association, whereas 3 studies found an inverse relationship between the two entities 128 . A recent comprehensive meta-review of systematic reviews and meta-analyses attempted to investigate all factors that are potentially associated with the development of ovarian carcinogenesis. Obesity and overweight were identified in 5 studies collectively among 226 included reviews in total, the former bearing an RR of 1.27 (95% CI 1.19-1.36, I 2 0%) and the latter 1.07 (95% CI 1.04-1.10, I 2 0%) 129 . The role of obesity in ovarian cancer survival has also been investigated by two meta-analyses, which found comparable relative risks with regard to survival between individuals living with obesity and those with a normal-range BMI, but with differing statistical significance depending on the included studies (HR 1.17, 95% CI 1.03-1.34 versus 1.11, 95% CI 0.97-1.27) 130 , 131 . Moreover, there was an inversely proportional relationship between survival and incremental increases in BMI 131 . On the contrary, in the meta-analysis by Cheng et al., no correlation was found between imaging-measured adiposity and overall or progression-free survival of ovarian cancer 63 . Data on the effect of MBS on the development of ovarian cancer stem from collective gynecologic cancer studies. According to the seminal SOS study, MBS had the strongest inverse effect on the incidence of ovarian cancer among all gynecologic cancers in the long run, but this effect was not statistically significant (HR 0.51, 95% CI 0.24-1.10) 26 . The meta-analysis of Ishihara et al. 
demonstrated a similar reduction of 53% in the risk of ovarian cancer, but in this case the outcome reached statistical significance (RR 0.47, 95% CI 0.27-0.81, I 2 0%) 25 . Furthermore, the population-based study of Khalid et al. found an ovarian cancer incidence of 0.43% in non-operated females versus 0.18% post-LSG and 0.15% post-RYGB ( p = 0.001), resulting in a risk reduction of 58% (OR 0.42, 95% CI 0.24-0.73) for LSG and 66% (OR 0.34, 95% CI 0.19-0.63) for RYGB 17 . Finally, Tsui et al. found an ovarian cancer incidence of 0.18% in the post-bariatric group versus 0.28% in non-operated women ( p <0.0001), with the individual incidences being 0.06% after LSG and 0.09% after RYGB ( p = 0.0283) 23 . These data show that ovarian cancer is underrepresented in the current literature, as compared to breast and endometrial cancer, with regard to its correlation with obesity and the impact of MBS on its incidence and survival. The available evidence has demonstrated a potential benefit from MBS, an observation that needs to be validated by dedicated ovarian cancer-oriented studies. Closing remarks This extensive review of the pathophysiological mechanisms and recent evidence on the impact of obesity and metabolic bariatric surgery on the development, progression, and prognosis of gynecologic malignancies has drawn the following conclusions: At present, breast, endometrial, and ovarian cancers are considered established obesity-related neoplasms. The relationship between obesity and obesity-related gynecologic cancers is beyond that of a mere risk factor: evidence from basic research and epidemiological projections supports the claim that obesity and tumorigenesis are inherently connected at a molecular and pathophysiologic level. Insulin resistance and increased IGF-1, disruption of the adipokine equilibrium with increased leptin, interleukins, and TNF-α and decreased adiponectin, augmented peripheral aromatization and production of estrogens, tissue hypoxia and locally disrupted inflammation, and a pivotal role of mesenchymal adipose tissue cells seem to be core motifs in the pathogenesis of obesity-related gynecologic cancers. Beyond the connection of obesity with certain types of gynecologic cancer (i.e., the metabolic complications of obesity), the clinician should also keep in mind other potential sequelae of obesity (i.e., mechanical, mental, and monetary complications). In this regard, obesity might represent a substantial obstacle to women's access to effective screening programs, or might have secondary effects such as resistance to chemotherapy and targeted therapies, persistent lymphedema, etc. Metabolic bariatric surgery can serve as a primary prevention measure against obesity-related gynecologic cancers in high-risk female populations and, more generally, in women living with obesity. Most importantly, metabolic bariatric surgery might hold a pivotal role as a secondary prevention measure in increasing disease-specific and overall survival in patients already diagnosed with obesity-related gynecologic cancers. Reinforcement of the relevant evidence will potentially lead to an expansion of the indications of MBS and add a safe, effective, and robust intervention to the armamentarium of healthcare professionals who deal with oncologic patients in the context of multidisciplinary management. 
The above-mentioned correlations should be interpreted with caution, given the retrospective nature of the majority of the relevant studies, particularly when it comes to the role of MBS as a measure of secondary prevention in patients who have already manifested one of the gynecologic cancers with an established link to obesity. Ideally, carefully designed randomized trials could establish a causal relationship between MBS and improved survivorship after gynecologic cancer. However, we acknowledge the technical difficulty, the considerable cost, and the potential ethical issues of carrying out such trials in cancer survivors. The implementation of novel research methods, such as machine learning analysis of the big data contained in population-based studies and registries 132 , 133 , might serve as a reliable alternative for investigating the role of MBS in preventing gynecologic cancer recurrence and extending survivorship.
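As a rough illustration of the registry-based approach alluded to above, the sketch below fits a Cox proportional-hazards model comparing cancer survivors with and without post-diagnosis MBS. The file name, column names, and the use of the Python lifelines package are assumptions made purely for illustration; a real analysis would additionally have to address immortal-time bias, confounding by indication, and competing risks.

```python
# Minimal sketch (not a validated analysis): Cox proportional-hazards model on a
# hypothetical registry extract of cancer survivors, estimating the adjusted
# hazard ratio associated with post-diagnosis metabolic bariatric surgery (MBS).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("registry_extract.csv")  # hypothetical file and columns
cols = ["followup_years", "died", "had_mbs", "age_at_dx", "bmi_at_dx", "stage"]
model_df = pd.get_dummies(df[cols], columns=["stage"], drop_first=True)

cph = CoxPHFitter()
cph.fit(
    model_df,
    duration_col="followup_years",  # time from diagnosis to death or censoring
    event_col="died",               # 1 = death, 0 = censored
)
cph.print_summary()  # hazard ratio for `had_mbs`, adjusted for the other covariates
```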
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):1077-1092
oa_package/93/22/PMC10788728.tar.gz
PMC10788729
0
1. Introduction Cervical cancer is the fourth leading cause of cancer death in women 1 , 2 . Uncontrolled chronic inflammation is frequently a significant and common factor contributing to cancer development and metastasis. Additionally, tissues affected by chronic inflammation often display disrupted microenvironments 3 . The tumor microenvironment (TME) is a complex, highly heterogeneous, and dynamic system, which is mainly composed of tumor cells, cancer-associated fibroblasts (CAFs), tumor-associated immune cells, and microvessels 4 , 5 . These cells are highly plastic and can continuously change their phenotype and function, forming an inflammatory microenvironment conducive to tumor development through direct cell-cell contact or dynamic crosstalk between soluble factors such as cytokines, chemokines, and growth factors 6 . It is well known that persistent human papillomavirus (HPV) infection is closely related to cervical cancer and, consequently, inflammation plays a key role in the occurrence and development of cervical cancer 7 , 8 . A variety of inflammatory mediators in the inflammatory microenvironment, such as growth factors, cytokines, and hormones, act on cancer cells and engage the nicotinamide adenine dinucleotide phosphate oxidase (NOX) family on the cell membrane to catalyze the generation of reactive oxygen species (ROS), which are important developmental and physiological stimuli. NOX4 is predominantly detected in human tumor cell lines and has been linked to various cellular processes, including the formation of invadopodia 9 , cell proliferation 10 , differentiation 11 , and epithelial-mesenchymal transition (EMT) 10 . Transforming growth factor-β1 (TGF-β1) is one of the mediators in the inflammatory microenvironment of tumors and acts mainly as a promoter during cancer progression. More importantly, cancer cells can utilize TGF-β signaling to induce EMT 12 . Elevated ROS levels are found in pathological conditions such as cancer and are major mediators contributing to the progression of cancers 13 , 14 . Hydrogen peroxide (H 2 O 2 ) is the most abundant and stable ROS in living cells; at low concentrations it acts as a second messenger and plays an important role in biological processes under physiological conditions and during cancer progression 15 - 17 . However, the passive diffusion of extracellular H 2 O 2 through the cell membrane is restricted 18 . For efficient H 2 O 2 signal transduction, there is a need for high-capacity and effective H 2 O 2 transmembrane influx, underscoring the importance of regulating cell permeability. Aquaporin 3 (AQP3) is a peroxiporin that, in addition to transporting water and glycerol, has been reported to promote cancer cell migration and invasion by transporting H 2 O 2 19 , 20 . H 2 O 2 , as a signaling molecule, promotes abnormal cell growth, metastasis, and angiogenesis 21 , 22 via the activation of pro-survival signaling pathways, loss of tumor suppressor gene function, increased glucose metabolism, adaptation to hypoxia, and the generation of carcinogenic mutations 22 , 23 . Inhibition of AQP3 expression can reduce the growth factor-induced influx of H 2 O 2 and weaken the downstream signaling cascade in cancer cells 20 . It is possible that AQP3 facilitates the transport of NOX-derived H 2 O 2 signals stimulated by growth factors 24 , thereby influencing, to some extent, the downstream intracellular effects of the H 2 O 2 that crosses biological barriers through AQP3. 
Currently, the mechanism by which AQP3 regulates H 2 O 2 transport to promote the malignant progression of cervical cancer remains unknown and requires further investigation. In this study, we used the cervical cancer HeLa cell line and established xenograft tumor models in nude mice to investigate the role and mechanism of AQP3 in the invasion and metastasis of cervical cancer through its control of NOX4-derived H 2 O 2 signaling. The results indicate that the growth factor-induced transmembrane transport of NOX4-derived H 2 O 2 is regulated by AQP3, and that AQP3-dependent H 2 O 2 signaling activates the intracellular Syk/PI3K/Akt signaling cascade, thereby promoting the invasion and metastasis of cervical cancer.
2. Materials and Methods 2.1 Cell lines The human cervical cancer cell lines HeLa (RRID: CVCL_0030), SiHa (RRID: CVCL_0032), and C-33A (RRID: CVCL_1094), and the human cervical epithelial immortalized cell line H8 (RRID: CVCL_9389), were obtained from the Pathology Laboratory of Xinjiang Medical University. All experiments were performed with mycoplasma-free cells, and all cells were authenticated using STR profiling. All cells were cultured in Dulbecco's modified Eagle's medium (DMEM, HyClone, USA) supplemented with 10% heat-inactivated fetal bovine serum (FBS, Gibco, USA), 100 U/mL penicillin, and 100 μg/mL streptomycin (Gibco, USA) at 37 °C in 5% CO 2 . 2.2 Western blotting Cells were lysed on ice with radioimmunoprecipitation assay (RIPA, Solarbio, China) buffer containing protease and phosphatase inhibitors. Equal amounts of protein extracts were boiled for 5 min, separated by SDS-PAGE, and then transferred onto polyvinylidene fluoride (PVDF, Sigma-Aldrich LLC.; 3010040001) membranes. The membranes were incubated overnight at 4 °C with the following primary antibodies: AQP3 (#AF5222, Affinity, China), Phospho-Syk (Tyr525/526) (#2710s, Cell Signaling Technology, USA), Syk (#sc1240, Santa Cruz, USA), Phospho-PI3Kinase p85α (Y607) (#ab182651, Abcam, UK), PI3Kinase p85α (#AF6241, Affinity, China), Phospho-Akt (Ser473) (#9271s, Cell Signaling Technology, USA), Akt (#4691s, Cell Signaling Technology, USA), and mouse anti-β-Actin (#66009-1-Ig, Proteintech, China). On the next day, the membranes were washed five times with TBST and incubated with HRP-conjugated secondary antibody for 1 h. The membranes were then rinsed five times with TBST and visualized with the Western Bright ECL detection system (Bio-Rad, USA). 2.3 Cell Migration (Scratch Wound) Assay The migration of cells after TGF-β1 (#100-21, Peprotech, USA) treatment was tested by wound scratch assay. HeLa cells were cultured as confluent monolayers in 6-well plates, synchronized in 1% FBS for 24 h, and wounded by removing a 300~500 μm-wide strip of cells across the well with a standard 10 μL pipette tip; floating cells were removed by washing with PBS. Media containing 10% FBS, with or without the indicated concentrations of TGF-β1, were added to the wells and incubated for 24 h. Five representative images of the scratched areas were photographed under a microscope at 0 h and 24 h. Finally, ImageJ software (NIH, Bethesda, MD, USA) was used to calculate the wound area. 2.4 Cell invasion assay The cell invasion assay was conducted in transwell chambers (Corning Incorporated, Corning, NY, USA). The chambers were pre-coated with 60 μL Matrigel (BD Biosciences, CA, USA) for the invasion assay. Cells treated with the different transfections were starved overnight and then seeded in the upper chamber at 1 × 10 5 cells in 100 μL of FBS-free medium. Meanwhile, 600 μL of medium containing 10% FBS was added to the lower chamber. The cells were incubated with TGF-β1 or IGF-1 (#G49-A118, MedChemExpress, USA) for 24 h; cells remaining in the upper chamber were then removed, and invading cells in the lower chamber were fixed with methanol for 30 min, stained with crystal violet solution, and counted under the microscope. 2.5 Colony formation assay Cells were seeded in 6-well plates at 5 × 10 2 cells per well. After 24 h of incubation, the cells were incubated with TGF-β1 for 2 weeks until clones were visible to the naked eye. 
After that, the cells were fixed with methanol, stained with crystal violet for 30 min, and photographed. Clone formation rates were calculated using ImageJ software. Each experiment was performed in triplicate. 2.6 Intracellular H 2 O 2 detection A DCFH-DA fluorescent (DCF) probe (#E004-1-1, Njjcbio, China) was used to measure intracellular H 2 O 2 , and DCF intensity was read on a fluorescence microplate reader. HeLa cervical cancer cells were detached with 2.5 g/L trypsin, digestion was terminated, and the cells were washed with serum-free medium and resuspended in medium containing 10% fetal bovine serum. The cell suspension was adjusted to 3 × 10 5 cells/mL, seeded in 6-well plates, and cultured for 24 h, after which the medium was replaced with fresh medium. The cells in the normal and control groups and in the treatment groups were treated, as indicated, with DPI (1 μM, #43088, Sigma-Aldrich, Germany) or NAC (5 mM, #A7250, Sigma-Aldrich, Germany), and then treated with 5 ng/mL TGF-β1 for 6 h. DCFH-DA (10 μM) was added to the cells in the culture dish, which were incubated for 20 minutes at 37 °C and then washed three times with serum-free cell culture medium to remove the extracellular probe. A fluorescence microplate reader was used to measure fluorescence with an excitation wavelength of 488 nm and an emission wavelength of 525 nm. 2.7 Cell transfection For shAQP3 transfection, a lentiviral vector containing GFP and puromycin-resistance sequences was used to construct the AQP3 shRNAs. The AQP3 shRNA was packaged into lentivirus and used to transduce HeLa cells for 16 hours. Subsequently, expression of the GFP gene was observed under a fluorescence microscope, and transduced cells were selected with puromycin. The expression levels of the target genes were examined by real-time PCR or western blotting as described in the corresponding sections. 2.8 Gene expression by qRT-PCR Total RNA was extracted from HeLa cells using the RNeasy Plus Mini Kit (TransGen, China). MultiScribe Reverse Transcriptase and random primers (Applied Biosystems) were used to synthesize cDNA from 1 μg of RNA. Samples were analyzed in duplicate by qRT-PCR (7500 Fast Real-Time PCR System, Bio-Rad, USA) using SYBR Green chemistry and pairs of forward and reverse primers. The expression of the genes of interest was normalized to the expression of GAPDH, which was not affected by genotype. Data were quantified using the 2 -ΔΔCT method. 2.9 Xenografts and tumor growth analysis in vivo BALB/c nude mice (female, 3-5 weeks old, Slaccas, Shanghai, China) were bred under aseptic conditions and housed at a constant humidity of 60%-70% and a room temperature of 18-20 °C. The mice were used in accordance with protocols approved by the Animal Care and Use Committee of Xinjiang Medical University. HeLa cells were injected at 4 × 10 6 cells per mouse subcutaneously or 1 × 10 6 cells per mouse via the tail vein. The mice were randomly assigned to the following four groups: vector, shAQP3, DPI, and NAC, and were injected subcutaneously or via the tail vein with the corresponding shRNA-transfected HeLa cells. DPI (10 ng/kg), NAC (100 mg/kg), or PBS was administered by intraperitoneal injection (i.p.) once daily for 15 days according to the schedule. Mice were monitored daily and their body weight was recorded. Tumor volume was calculated with the following formula: tumor volume = [L × W 2 ]/2, where W = tumor width and L = tumor length. 
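For illustration only, the two calculations described in sections 2.8 and 2.9 (relative gene expression by the 2 -ΔΔCT method, normalized to GAPDH and to the control group, and caliper-based tumor volume) can be written as small helper functions. This is not the authors' code, and all Ct values and measurements in the example are hypothetical.

```python
# Illustrative helpers for the 2^-ddCt method (section 2.8) and the caliper-based
# tumor volume formula (section 2.9). All input values below are hypothetical.

def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """2^-ddCt: dCt = Ct(target) - Ct(GAPDH); ddCt = dCt(sample) - dCt(control)."""
    delta_ct_sample = ct_target - ct_gapdh
    delta_ct_control = ct_target_ctrl - ct_gapdh_ctrl
    ddct = delta_ct_sample - delta_ct_control
    return 2 ** (-ddct)

def tumor_volume(length_mm, width_mm):
    """Tumor volume = (L x W^2) / 2, in mm^3."""
    return length_mm * width_mm ** 2 / 2

# Hypothetical example: AQP3 mRNA in shAQP3 cells relative to vector control
print(relative_expression(ct_target=27.8, ct_gapdh=18.2,            # shAQP3 sample
                          ct_target_ctrl=25.1, ct_gapdh_ctrl=18.0))  # ~0.18-fold
print(tumor_volume(length_mm=9.0, width_mm=6.0))  # 162.0 mm^3
```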
2.10 Histopathology and immunostaining For histological evaluation, the dissected tumors were fixed in 10% neutral-buffered formalin for approximately 1 week, embedded in paraffin, and sectioned. Samples were stained with hematoxylin and eosin (H&E). For immunohistochemical (IHC) staining, the tissue slides were deparaffinized with xylene (Zsbio, China) and rehydrated with ethanol. After inhibition of endogenous peroxidase with 3% H 2 O 2 in methanol, the sections were rinsed with PBS, blocked with 10% normal goat serum (Zsbio, China) for 30 min at 20-25 °C, incubated with primary antibodies (p-Syk, 1:100; p-p85α, 1:100; p-Akt, 1:100) overnight at 4 °C, and then incubated with secondary antibodies at room temperature for 30 min. Following DAB development, hematoxylin counterstaining, dehydration, and clearing in xylene, the slides were mounted. Using Image-Pro Plus 6.0, the degree of staining was assessed from the staining intensity and the positive area fraction. Staining intensity was measured as the cumulative optical density (integrated optical density, IOD), and the degree of staining was expressed as the mean density, i.e., the average optical density (AOD), calculated as AOD = IOD / positive area fraction. 2.11 Co-immunoprecipitation (Co-IP) After the protein concentrations were adjusted to equal amounts, the lysates were immunoprecipitated with the respective antibodies (GFP-tag, #T0005, Affinity, China; NOX4, #BM4135, Boster Bio, USA) or IgG for 2 h and then incubated with protein A/G agarose beads (Thermo Fisher, USA) at 4 °C overnight. Next, the immunoprecipitated proteins were washed with lysis buffer and eluted from the agarose beads with 4× loading buffer. Bound proteins were then denatured and analyzed by western blotting. 2.12 Statistical analysis Data are presented as mean ± standard deviation (SD). Differences were analyzed using Student's t-test or one-way ANOVA, and P values < 0.05 were considered statistically significant. All statistical analyses were performed using SPSS 26.0 software.
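The quantification and statistics described above were performed with ImageJ, Image-Pro Plus 6.0, and SPSS 26.0; as an illustrative equivalent only (not the authors' pipeline), the following Python snippet reproduces the same simple calculations (wound closure rate, AOD as IOD divided by the positive area fraction, Student's t-test, and one-way ANOVA) on hypothetical values using SciPy.

```python
# Illustrative Python equivalents of the quantification and statistics in
# sections 2.3, 2.10, and 2.12. All measurement values are hypothetical.
import numpy as np
from scipy import stats

def wound_closure(area_0h, area_24h):
    """Percent wound closure from scratch-assay areas measured in ImageJ."""
    return (area_0h - area_24h) / area_0h * 100

def mean_density(iod, positive_area):
    """AOD (mean density) = integrated optical density / positive area fraction."""
    return iod / positive_area

print(wound_closure(area_0h=5.2e5, area_24h=2.0e5))   # ~61.5% closure
print(mean_density(iod=1.2e5, positive_area=2.9e5))   # ~0.41 AOD

# Hypothetical wound closure (%) in control vs. shAQP3 cells, n = 5 fields each
control = np.array([62.1, 58.4, 65.0, 60.2, 63.7])
sh_aqp3 = np.array([31.5, 28.9, 35.2, 30.1, 33.4])
t, p = stats.ttest_ind(control, sh_aqp3)              # two-group comparison
print(f"t-test: t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA across vehicle, DPI, and NAC groups (hypothetical AOD values)
vehicle = [0.42, 0.39, 0.45, 0.41]
dpi     = [0.25, 0.28, 0.23, 0.27]
nac     = [0.29, 0.31, 0.27, 0.30]
f, p = stats.f_oneway(vehicle, dpi, nac)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
```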
3. Results 3.1. TGF-β1-induced exogenous H 2 O 2 is transported into the cell via AQP3 In our study, we selected HeLa cells due to their high expression of AQP3 (Supplementary Table S1 , Figure S1 ). To investigate the role of AQP3 in cervical cancer, we employed lentiviral shRNA to knock down AQP3 in HeLa cells. This resulted in a reduction in AQP3 protein and mRNA levels compared to the control group (Figure 1 A, B), with AQP3-49-shRNA showing the highest knockdown efficiency, leading us to select it for further experiments. To confirm the translocation of extracellular H 2 O 2 into the cell via AQP3, we applied varying concentrations of H 2 O 2 to the extracellular medium, labeled the cells with the H 2 O 2 -sensitive fluorescent dye DCFH-DA, and measured the fluorescence signal with a fluorescence microplate reader as an indicator of intracellular H 2 O 2 content. The results demonstrate a significant increase in intracellular H 2 O 2 levels after H 2 O 2 addition, with the control group showing notably higher levels than the AQP3-49-shRNA group (Figure 1 C). Likewise, the introduction of TGF-β1, a NOX4 stimulator, markedly enhanced intracellular H 2 O 2 accumulation, and AQP3 knockdown inhibited the influx of extracellular H 2 O 2 compared with the control group (Figure 1 D). These results indicate that AQP3 can transport extracellular H 2 O 2 into cells. Next, HeLa cells were incubated with the NOX4 inhibitor diphenyleneiodonium (DPI) or the ROS scavenger N-acetyl-L-cysteine (NAC), and we found that this pretreatment significantly reduced the TGF-β1-induced increase in intracellular H 2 O 2 (Figure 1 E). We therefore demonstrate that, upon TGF-β1 stimulation, exogenous NOX4-produced H 2 O 2 translocates into the cell via AQP3. 3.2. Knockdown of AQP3 attenuated the effects of NOX4-derived H 2 O 2 on migration, invasion, and proliferation of HeLa cells Multiple studies have shown that AQP3 promotes cancer progression by increasing the motility and invasiveness of cancer cells 25 . Therefore, we first investigated the ability of AQP3 to promote cell migration by transporting H 2 O 2 . A cell scratch assay was used to measure the wound closure rate of HeLa cells. The results showed that knockdown of AQP3 significantly inhibited wound healing ability (Figure 2 A). We next studied the impact of AQP3 on the invasion capability of cervical cancer cells. The results showed that knockdown of AQP3 in HeLa cells had a suppressive effect on cell invasion, even when exogenous NOX4-derived H 2 O 2 was induced by the addition of TGF-β1 (Figure 2 B). In parallel, the colony formation assay further supported the oncogenic role of AQP3 (Figure 2 C). In the tumor microenvironment, various signals, including hypoxia, promote migration by enhancing cytoskeletal activity 26 . F-actin is one of the most important structural components of the cytoskeleton, and its assembly is closely related to cell migration. Accordingly, we next used phalloidin to label F-actin to observe changes in the cytoskeleton during migration. In the presence of TGF-β1, the polarized morphology of F-actin during migration was observed at the cell edge in control cells, but not in AQP3 knockdown cells (Figure 2 D). In conclusion, these findings strongly link AQP3 to the metastatic and invasive traits of cervical cancer cells. 
3.3. AQP3 regulates Syk phosphorylation and activation of the PI3K/Akt signaling pathway in HeLa cells To uncover the molecular mechanism underlying the oncogenic effects of AQP3 in cervical cancer cells, we performed immunoprecipitation experiments and found that AQP3 interacts with NOX4 and may form a complex with it at the cell membrane (Figure 3 A). Next, we examined whether changes in AQP3 expression affected PI3K/Akt-related signaling pathways, given previous suggestions of a role for AQP3 in cancer progression through this pathway 27 , 28 . As shown in Figure 3 B, knockdown of AQP3 in HeLa cells treated with different concentrations of TGF-β1 reduced the levels of phosphorylated PI3K p85α and Akt, while total protein levels were almost unchanged. Given that spleen tyrosine kinase (Syk) is known to drive phosphorylation of the PI3K signaling pathway by initiating BCR signaling 29 , we investigated Syk expression. Western blot results (Figure 3 B) revealed increased phosphorylation of Syk with rising TGF-β1 concentrations, and this effect was significantly suppressed by AQP3 knockdown. Subsequently, we observed that the addition of exogenous H 2 O 2 activated the PI3K signaling pathway in a time-dependent manner and that this activation was similarly attenuated in AQP3 knockdown cells (Figure 3 C). These findings suggest that AQP3 may exert its tumorigenic function in cervical cancer cells by activating the PI3K/Akt-related signaling pathway. 3.4. Increased H 2 O 2 after TGF-β1 stimulation promotes phosphorylation of Syk, PI3K p85α and Akt To investigate whether NOX4-derived H 2 O 2 promotes PI3K/Akt phosphorylation, HeLa cells were pretreated with DPI or NAC. The pretreated cells showed a diminished response to TGF-β1 and decreased phosphorylation compared with controls (Figure 4 A). This suggests that H 2 O 2 produced by NOX4 is involved in the activation of the PI3K-related signaling pathway under TGF-β1 stimulation. AQP3 knockdown cells were then treated with TGF-β1 alone, H 2 O 2 alone, or the two in combination. The results demonstrated that H 2 O 2 alone induced Syk, PI3K p85α, and Akt phosphorylation in the tested cell line. Furthermore, combining H 2 O 2 with TGF-β1 further increased the H 2 O 2 -induced phosphorylation of Syk, PI3K p85α, and Akt (Figure 4 B). We next investigated whether insulin-like growth factor-1 (IGF-1), an agonist of PI3K, could reverse the effects of AQP3 knockdown on phosphorylation of the PI3K/Akt pathway. AQP3 knockdown cells were stimulated with 100 ng/mL IGF-1, and western blotting was performed. The results showed that IGF-1 significantly increased PI3K p85α and Akt phosphorylation in AQP3-49-shRNA HeLa cells, while total protein levels were relatively unchanged (Figure 4 C). This suggests that knockdown of AQP3 inhibits activation of the PI3K/Akt signaling pathway in HeLa cervical cancer cells and that this effect can be partially reversed by a PI3K agonist. We then examined whether IGF-1 could induce migratory and invasive behavior. Consistent with this, our data showed that although AQP3 knockdown reduced H 2 O 2 signal transduction, IGF-1 enhanced wound healing and invasion abilities (Supplementary Figure S2 A, B). Similarly, a polarized morphology of F-actin during migration was observed at the edge of IGF-1-treated cells (Supplementary Figure S2 C). These results confirm that AQP3 regulates downstream signaling through NOX4-derived H 2 O 2 induced by TGF-β1 stimulation. 
Additionally, the PI3K agonist (IGF-1) partially reversed the inhibition of the PI3K/Akt pathway caused by AQP3 knockdown, providing further evidence of AQP3's regulatory role in NOX4-derived H 2 O 2 signaling. This, in turn, leads to the activation of the PI3K/Akt pathway, contributing to the malignant progression of cervical cancer. 3.5. Knockdown of AQP3 inhibited the formation of subcutaneous xenograft tumors in nude mice Our previous research found that AQP3 expression in carcinoma of the cervix significantly increased in advanced stage disease, and patients with deeper tumor infiltration, lymph node metastases or larger tumor volume, which suggests AQP3 may participate in the initiation and progression of cervical carcinoma by promoting tumor growth, invasion or lymph node metastasis 30 . As mentioned earlier, the knockdown of AQP3 showed a protective effect against cervical cancer progression. Next, AQP3 knockdown HeLa cells or control cells were injected into nude mice to establish a subcutaneous xenograft model (Figure 5 A). The control model was treated with PBS, DPI (10 ng/kg, i.p, 15 days) or NAC (100 mg/kg, i.p, 15 days). It was found that DPI or NAC treatment slowed tumor growth compared to vehicle (Figure 5 B), and there was no significant reduction in the body weight (Supplementary Figure S3 A). At the same time, compared with the control group, the knockdown of AQP3 also slowed the growth of subcutaneous tumors in nude mice (Figure 5 C), while no significant difference in body weight was observed (Supplementary Figure S3 B). After observing the Hematoxylin and Eosin staining (H.E.) sections of the tumors, it was found that the necrotic area in both PBS (vehicle) group and the control was larger than that of both DPI or NAC-treated group and the AQP3 knockdown. It indicates that they all have relatively high degrees of malignancy (Figure 5 D, E, top). Since advanced cervical cancer is prone to lymph node metastasis, we also examined lymph node metastasis in nude mice. In our results, the rate of lymph node metastasis was reduced in DPI- or NAC-treated groups, as well as in the AQP3 knockdown nude mice (Figure 5 D, E, bottom, Supplementary Figure S3 C). Then Immunohistochemistry (IHC) of xenograft tumors was conducted to assess p-Syk, p-PI3K p85α, and p-Akt in vivo . The results showed that treatment with DPI or NAC, as well as knockdown of AQP3, significantly inhibited the protein expression of p-Syk, p-PI3K p85α or p-Akt (Figure 5 F, G, Supplementary Figure S3 D). We also evaluated the effect of DPI or NAC on the above protein expression by Western blot and found that AQP3, p-Syk, p-PI3K p85 or p-Akt levels were down-regulated (Figure 5 H, Supplementary Figure S3 E). Western blot results also showed that AQP3 knockdown decreased the phosphorylation of the key proteins (Figure 5 I, Supplementary Figure S3 F). 3.6. AQP3 transport of H 2 O 2 enhances in vivo metastasis in nude mice To explore the effect of AQP3 on the transcellular transport of NOX4-produced H 2 O 2 in nude mice in vivo , we constructed a tail vein metastasis model (Figure 6 A). For tail vein-injected nude mice, no significant difference in body weight could be observed prior to execution (Supplementary Figure S4 A, B). We euthanized the mice 15 days after DPI or NAC application and then removed the lungs and liver to observe metastasis. H.E. staining results showed that DPI or NAC could significantly attenuate the lung metastasis from a tail vein injection of HeLa cells (Figure 6 B). 
The knockdown of AQP3 had the same effect: whereas the control nude mice developed multiple lung metastatic foci, no pulmonary metastases were detected in the AQP3 knockdown group (Figure 6 C). In addition, we found liver metastases in or near the hepatic blood sinusoids; however, no significant difference in liver metastasis was found among the groups (Supplementary Figure S4 C, D). Our data confirmed the role of AQP3 in promoting HeLa cell metastasis in vivo . In summary, the reduction of NOX4-derived H 2 O 2 by DPI or NAC, or the attenuation of AQP3 as the ROS transport channel, led to a decrease in tumor cell metastasis to the lungs. AQP3 was thus demonstrated to have a significant oncogenic role in ROS regulation.
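Group comparisons of metastasis incidence like the ones summarized above are made on small count data (numbers of mice with and without detectable metastases). The excerpt does not state which test the authors used for these incidence comparisons; purely as an illustration, and with hypothetical counts that are not the study's data, a minimal sketch of such a comparison with Fisher's exact test:

```python
from scipy.stats import fisher_exact

# Hypothetical example counts (NOT the study's data): number of mice
# with and without detectable lung metastases in two groups.
control = {"with_mets": 5, "without_mets": 1}
knockdown = {"with_mets": 0, "without_mets": 6}

table = [
    [control["with_mets"], control["without_mets"]],
    [knockdown["with_mets"], knockdown["without_mets"]],
]

# Fisher's exact test is a common choice for small 2x2 incidence tables.
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```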
4. Discussion Inflammatory factors are essential components of the tumor microenvironment. Inflammation not only promotes cell proliferation and metastasis through epigenetic changes, abnormal gene expression and angiogenesis, but also releases large amounts of reactive oxygen species (ROS) that drive cancer evolution 31 . Hence, ROS play a crucial role rather than serving merely as by-products of redox reactions induced by oxidative stress. ROS are a class of highly reactive species, such as the hydroxyl radical (•OH), the superoxide radical (O2•-), and hydrogen peroxide (H 2 O 2 ) 32 , 33 . Superoxide can rapidly and spontaneously convert to H 2 O 2 , which serves as a signaling molecule, leading to the abnormal activation of various signaling pathways and contributing to cancer progression. An important clinicopathological feature of cervical cancer is that it is often accompanied by persistent chronic inflammation. Considering the significant role of ROS in pathophysiology, how the entry of extracellular ROS produced by NOX4 into cells is regulated to promote pro-cancer signaling in cervical cancer has become a pressing research concern. Several studies have shown that certain AQPs, such as AQP3, AQP5, AQP8 and AQP9, can transport various polar small molecules, including extracellular H 2 O 2 , which is why they are known as peroxiporins 34 . Among them, AQP3 is well documented to transport H 2 O 2 . In this preliminary study, we found that the expression of AQP3 in the HeLa cell line was the highest among the 4 cell lines representing the main molecular subtypes of cervical cancer. Knockdown of AQP3 in the HeLa cell line attenuated the TGF-β1-induced entry of NOX4-derived H 2 O 2 into the cells as well as the migration and invasive capacity of HeLa cells, which was confirmed by the nude mouse xenograft tumor model. An important question is how AQP3 regulates NOX4-derived H 2 O 2 . As far as the literature we have reviewed is concerned, this issue has rarely been reported. Here, co-localization of AQP3 and NOX4 on the HeLa cell membrane was observed by Co-IP, which suggested that there is an interaction between AQP3 and NOX4 and that TGF-β1 triggers NOX4 to produce more H 2 O 2 , a second messenger, which flows into the cell upon opening of the AQP3 channel. We also explored whether AQP3-mediated H 2 O 2 transport depends on NOX4, and verified that AQP3 promotes invasion and metastasis after NOX4 activation in HeLa cells. AQP3 has been reported to influence cancer progression by transporting H 2 O 2 and regulating intracellular ROS levels. AQP3 promotes malignant transformation and stimulates the proliferation and metastasis of lung adenocarcinoma cells by promoting the uptake of H 2 O 2 , which oxidizes and inactivates PTEN and inhibits autophagy 35 . Additionally, silencing of AQP3 reduces MMP expression in gastric cancer cells and attenuates their invasion and metastasis in a PI3K/Akt-dependent manner 27 . In breast cancer, AQP3 has also been shown to regulate oxidative responses and PI3K/Akt activation, affecting disease progression 28 , 36 . These studies all support our view that H 2 O 2 acts as a second messenger to promote the progression of cervical cancer through the mediation of AQP3. H 2 O 2 signaling resembles other signal transduction in that it is characterized by a series of phosphorylation events that occur locally in the cell. 
Syk (spleen tyrosine kinase), a non-receptor tyrosine kinase that mediates signaling downstream of a variety of transmembrane receptors, has been detected to be highly associated with malignant tumors including, but not limited to, lymphoid malignancies, colon cancer, non-small cell lung cancer, breast cancer, and ovarian cancer 29 , 37 - 39 . In our study, Syk phosphorylation was significantly enhanced by the entry of extracellular H 2 O 2 . In vivo and in vitro assays, Syk phosphorylation was reduced after the treatment of NOX4 inhibitor DPI and H 2 O 2 inhibitor NAC. Previous studies have shown that 15(S)-HETE induces ROS production in XO-dependent activation of NOX, which leads to the activation of non-receptor tyrosine kinases (NRTK) such as Syk and Pyk2 in monocytes 40 . Subsequent experiments demonstrated that ROS production enhanced atherogenesis by Syk and Pyk2-mediated STAT1 activation and CD36 expression 41 . Coincidentally, in a study of periodontitis, the authors demonstrated that Trem2 increases intracellular ROS levels and mediates osteoclast differentiation through a SYK-dependent signaling cascade 42 . Thus, Syk may function as a downstream molecule in response to H 2 O 2 signaling. Syk, as a downstream effector shared by multiple oncogenic receptors, mediates downstream signal transduction of multiple transmembrane receptors 43 . We next explored the downstream pathways of H 2 O 2 /Syk signaling in more depth. In our experiments, when Syk was phosphorylated due to increased H 2 O 2 entering into the cell, the PI3K/Akt signaling pathway was activated, promoting invasion and metastasis in HeLa. This result was also confirmed in vivo animal experiments, where nude mice treated with more H 2 O 2 being translocated into the cells exhibited worse malignancy manifestations, like enlarged volume of graft tumors or increased number of metastatic foci. In other studies, it has been found that there are several other pathways involved in the delivery of Syk signaling. Both the chemical inhibition and molecular depletion of Syk induced the pro-apoptotic HRK protein via a PI3K/Akt-dependent mechanism in BCR-dependent DLBCL cell lines and primary tumors with low baseline NF-κB activity 44 . Another finding showed that phosphorylation of CD19-Akt was only observed in the presence of Syk-wild-type but not Syk K402A -kinase-dead form 29 . It is, therefore, possible that the PI3K/Akt pathway is located in the downstream signaling to function in response to Syk. PI3K/Akt is also a classic oncogenic pathway 45 , which was activated in the H 2 O 2 /Syk signaling pathway. We validated the pro-cancer role of the H 2 O 2 /Syk/PI3K signaling axis. Our data demonstrates that inflammatory mediator TGF-β1 stimulates cervical cancer cells to accelerate metabolism, prompting NOX4 in the cell membrane to produce large amounts of ROS, which are converted to relatively stable H 2 O 2 . AQP3 interacts with NOX4 in the cell membrane, and a large amount of H 2 O 2 flows into the cell through the open AQP3 channel, which acts as a signaling molecule to activate the Syk/PI3K/Akt pathway, promoting the invasion and metastasis of cervical cancer. Transcription of the Syk gene produces two selective splice products: the full-length Syk, termed Syk (L), and the shorter gene product, SykB, also known as Syk (S). Current studies have shown that Syk (L) and Syk (S) have different effects on the growth characteristics of cancer cells. 
The ability of Syk to act as a promoter or repressor of malignant cell growth appears to be highly dependent on the cell type, its stage of differentiation, the relative levels of the two Syk isoforms expressed, and other factors 46 . Therefore, the role of Syk in the proliferation and migration of cancer cells requires further study. Taken together, our results suggest that AQP3 promotes cervical cancer invasion and metastasis by regulating NOX4-derived H 2 O 2 transport into cancer cells, thereby activating the Syk/PI3K/Akt signaling pathway. Inhibition of the H 2 O 2 /Syk/PI3K/Akt signaling axis may be an effective way to treat cervical cancer, and AQP3 could be an effective target for the treatment of cervical cancer. A limitation is that human cervical cancer tissue samples were not included in the present study, which warrants further investigation.
# Co-authors; these authors contributed equally to this paper. Competing Interests: The authors have declared that no competing interest exists. Unrestrained chronic inflammation leads to the abnormal activity of NOX4 and the subsequent production of excessive hydrogen peroxide (H 2 O 2 ). Excessive H 2 O 2 signaling triggered by prolonged inflammation is thought to be one of the important drivers of progression in some types of cancer, including cervical cancer. Aquaporin 3 (AQP3) is a member of the water channel protein family, and it remains unknown whether AQP3 can regulate the transmembrane transport of nicotinamide adenine dinucleotide phosphate (NADPH) oxidase 4 (NOX4)-derived H 2 O 2 induced by inflammatory factors to facilitate malignant progression in cervical cancer. In this study, the cervical cancer HeLa cell line was treated with diphenyleneiodonium (DPI), N-Acetylcysteine (NAC) or lentivirus-shRNA-AQP3. Plate cloning, cell migration and transwell invasion assays, among others, were performed to assess the invasive and migratory ability of the cells. Western blot and Co-IP were used to analyze the mechanism by which AQP3 regulates H 2 O 2 conduction. Finally, in vivo assays were performed for validation in nude mice. AQP3 knockdown, DPI or NAC treatment all reduced intracellular H 2 O 2 influx, inhibited the activation of the Syk/PI3K/Akt signaling axis, and attenuated the migration and invasive ability of the cells. In vivo assays confirmed that excessive H 2 O 2 transport through AQP3 enhanced the infiltration and metastasis of cervical cancer. These results suggest that AQP3 activates the H 2 O 2 /Syk/PI3K/Akt signaling axis by regulating NOX4-derived H 2 O 2 transport to contribute to the progression of cervical cancer, and AQP3 may be a potential target for the clinical treatment of advanced cervical cancer.
Supplementary Material
We appreciate Dr. Shayahati Bieerkehazhi (UT Health Science Center at Houston, University of Texas, USA) for language editing of the manuscript. Funding This work was supported by the National Natural Science Foundation of China (81660427, to Yonghua Shi) and the Natural Science Foundation Project of Xinjiang Autonomous Region, China (2021D01A47, to Yonghua Shi). Author Contributions Yonghua Shi developed the study concept and design. Qixin Wang, Bingjie Lin, Hongjian Wei, Xin Wang and Xiaojing Nie performed the experiments and collected the data. Qixin Wang, Bingjie Lin and Hongjian Wei conducted the animal experiments. Qixin Wang and Bingjie Lin co-wrote the paper. Qixin Wang and Yonghua Shi contributed to the pathological analysis. Yonghua Shi supervised the research, the writing and the revision of the paper. All authors have read and agreed to the published version of the manuscript. Data Availability Statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Institutional Review Board Statement The animal study protocol was approved by the Ethics Committee of Xinjiang Medical University. Abbreviations Akt: Protein Kinase B; AQP3: Aquaporin 3; DPI: diphenyleneiodonium; EMT: Epithelial-mesenchymal transition; GFP: Green Fluorescent Protein Tag; H 2 O 2 : hydrogen peroxide; H.E.: Hematoxylin and Eosin staining; IGF-1: Insulin-like growth factor 1; IHC: Immunohistochemistry; NAC: N-Acetylcysteine; NOX4: NADPH oxidase 4; PI3K: Phosphoinositide 3-kinase; ROS: reactive oxygen species; Syk: Spleen Tyrosine Kinase; TGF-β1: transforming growth factor-β1; TME: tumor microenvironment
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):1124-1137
oa_package/68/d5/PMC10788729.tar.gz
PMC10788730
0
Introduction The tumor vasculature is an important component of the tumor microenvironment, providing nutrients essential for tumor genesis and development. The hyper-proliferation and migration of vascular endothelial cells (VECs) in tumors result in the formation of chaotic and destabilized vasculature, which leads to limited nutrient delivery, hypoxia and acidosis, and promotes tumor growth, metastasis, and recurrence 1 , 2 . Therefore, inhibiting the abnormal activity of VECs is a promising anticancer strategy. Moreover, tumor vessels are abnormal both structurally and functionally. The inner wall of blood vessels is composed of endothelial cells (ECs) interconnected by junctional molecules. Among these, adhesive molecules expressed at the basolateral surface of activated VECs regulate the intravasation of cancer cells 3 . Meanwhile, tight junction proteins (TJs) expressed at the tight junctions between VECs alter cancer cell migration by controlling the permeability of endothelial monolayers 4 . Thus, the expression of adhesive molecules and TJs by VECs plays an important role in tumor trans-endothelial migration. Tubeimoside-1 (TBMS1) is a major active ingredient of the Chinese medicinal herb Bolbostemma paniculatum (Maxim) Franquet (Cucurbitaceae) . TBMS1 exhibits anti-tumor activity with low toxicity in a variety of tumor contexts, including lung cancer, cervical cancer, and ovarian cancer, and usually exerts its anticancer action through direct toxicity on cancer cells, including inhibiting proliferation and inducing apoptosis, autophagy, and cell cycle arrest 5 , 6 . Although TBMS1 has been reported to inhibit tumor angiogenesis by regulating angiogenesis-related growth factors and their receptors 7 , the mechanism of action of TBMS1 on tumor microvessels remains to be completely elucidated. The main purpose of this study was to investigate the mechanism of action and impact of TBMS1 on tumor microvessels. We performed in vivo mouse tumor models and in vitro cell activity assays and then analyzed and predicted the potential targets and pathways of TBMS1 using network pharmacology. Here, we found that TBMS1 suppressed tumor microvessel density in tumor models and acted as a potent regulator of vascular activity that targets VECs to counteract abnormal tumor adhesion and vascular permeability. In conclusion, TBMS1 acts as a vasoactive drug that preserves vascular integrity to inhibit tumor cell trans-endothelial migration.
Materials and methods Cell lines and reagents Human ovarian cancer cell SKOV3, and human umbilical vein endothelial cell HUVEC were maintained in RPMI 1640 medium (Gibco, Waltham, MA, USA), supplemented with 10% fetal bovine serum, 100 U/mL penicillin and 100 mg/mL streptomycin, and kept at 37oC in a humidified atmosphere containing 5% CO 2 . All cell lines purchased from Shanghai Cell Biology Institute (Shanghai, China). Tubeimoside-1 (TBMS1) was purchased from Yuanye Bio-technology Co. Ltd. (Shanghai, China). Xenograft tumor models Five-week-old female nude BALB/c mice, were purchased from Beijing HFK Bioscience Co., Ltd. (Beijing, PR China). SKOV3 cells (5×10 6 per mouse) were subcutaneously ( s.c. ) injected into the right flank of nude mice. When the tumor volumes reached about 100 mm 3 , mice were randomized to two groups (n=6) and treated with TBSM-1 (10 mg/kg orally daily for consecutive 14 days) or normal saline (NS). After 14 days, the mice were euthanized. Animal experimental procedures were conducted in accordance with guidelines for experimental animals and approved by the Animal Ethics Committee of Shandong University. Immunohistochemistry (IHC) IHC staining with hematoxylin and eosin (H&E), Ki67, and CD31 (Abcam, Cambridge, UK) of xenograft tumor tissue sections was performed using a DAB substrate kit (Maxin, Fuzhou, China), according to the manufacturer's instructions. MTT assay Cells were seeded in a 96-well plate at a concentration of 5×10 3 per well and cultured with different concentrations of TBMS1 for 1-3 days. The cell number was evaluated using an MTT assay (Beyotime, Beijing, China). The absorbance value (OD 490 ) was obtained using a microplate reader (Synergy 2; BioTek, Winooski, Vermont, USA) at a wavelength of 490 nm. Flow cytometry assay Cells (2×10 5 per well) were plated into 6-well plate and cultured with different concentrations of TBMS1 for 24 h. Cells were harvested and washed in PBS (phosphate buffered solution). For apoptosis analysis, cells were tested using an apoptosis kit (BestBio, Shanghai, China) and analyzed by flow cytometry (BD Biosciences, CA, USA). For cell cycle analysis, cells were fixed in 70% ethanol at 4oC overnight. After washing with phosphate-buffered saline (PBS), the fixed cells were incubated in PBS containing 20 μg/mL of propidium iodide (PI), 200 μg/mL of RNase A, and 0.1% Triton X-100 (BD Biosciences, CA, USA) at 37oC for 30 minutes. The stained cells were then analyzed for cell cycle distribution using a FACSCalibur flow cytometer (BD Biosciences, CA, USA). Quantitative real-time (qRT) PCR Total RNA was extracted using TRIzol reagent (Invitrogen, Carlsbad, California, USA), according to the manufacturer's instructions, and cDNA was synthesized by PrimeScriptTM RT Reagent Kit (TaKaRa Biotechnology, Co., Ltd., Dalian, China). Next, SYBR Green PCR Master Mix (TaKaRa Biotechnology) was used to perform real-time-PCR (qPCR). All reactions were carried out on an Applied Biosystems 7500 Real-Time PCR System (Thermo Fisher Scientific, Inc.). Relative gene expression was analyzed using the 2 -ΔΔCt method; PCR primer sequences are provided in Table S1 . Tube formation assay The tube formation assay was performed using HUVEC cells, as previously described 8 . Briefly, 96-well plates pre-coated with 50 μL growth factor reduced (GFR) Matrigel basement membrane matrix (BD Biosciences, CA, USA) were incubated at 37°C for 1 h to allow gel formation. HUVEC cells (3×10 5 per well) were plated into the plate. 
Tube formation was assessed after 6 h and photographs were taken using an inverted fluorescence microscope (Olympus, Japan). Wound-healing assays The wound healing assay was performed according to a previously described methodology 9 to test cell migration. Cells (1×10 6 ) were spread onto 6-well plates marked at the bottom to the cells reached more than 95% confluency. A 100 μL pipette tip was used to scratch the cells along the marks. The cell debris were washed off with PBS buffer and fresh serum-free medium was added into the plate. Then, cell migration was observed and imaged at 0 and 24 h by using an inverted microscope (Zeiss Axioskop 2, German). Transwell assay The invasion assay was performed using transwell chambers coated with fibronectin and matrigel (8.0 μm pore size; Millipore, MA). Briefly, HUVEC cells (1×10 5 per well) were added to the upper chamber of transwell filters in a 24-well plate. RPMI 1640 with 10% FBS was added to the lower chamber. Cells were treated with PBS or TBMS1 and incubated for 24 h. Cells that migrated to the bottom of the filter were stained with crystal violet (Solarbio, Beijing, China). Endothelial adhesion assay HUVEC cells (2×10 5 per well) were seeded in a 12-well plate and pretreated with TBMS1 (10 μg/mL) for 24 h prior to treatment with TNF-α (10 ng/mL) for an additional 6 h. Next, CFSE-labeled SKOV3 or B16 cells (1×10 5 per well) were added on the top of the VEC monolayers for 2 h. After 2 h, wells were washed gently 3 times with PBS to remove non-adherent cells and adherent cells were photographed with a fluorescent microscope; a minimum of 5 fields/well were quantified. Endothelial permeability assay HUVEC cells (2×10 4 per well) were added to the upper chamber of a transwell insert (0.4 μm pore size; Millipore) in a 24-well plate. Cells were allowed to reach confluence, and were then treated with PBS or TBMS1 for 24 h. Rhodamine-dextran (10 mg/mL, average mw~70,000; Sigma, USA) was then added to the top well. The appearance of rhodamine-dextran in the bottom well was monitored during a 1 h time course. The absorbance at 590 nm at each time point was recorded. Western blot analysis Proteins from the HUVEC cell lines were extracted using RIPA buffer (BestBio, Shanghai, China) containing protease inhibitor cocktail (Roche Diagnostics). The proteins were then separated by 10% SDS-PAGE and transferred onto PVDF membranes, which were blotted with primary antibodies. Rabbit polyclonal antibodies against p-Akt (Ser473), Akt, p-Erk1/2 (Thr202/Tyr204), Erk1/2, p-Stat3 (Tyr705), p-Stat3 (Ser727), Stat3, p-NFκB (Ser536), NF-κB, and GAPDH were purchased from Cell Signaling technology (Beverly, MA, USA). Membranes were then stained with the appropriate secondary antibody conjugated with HRP, then visualized using enhanced chemiluminescence (Millipore, Billerica, MA, USA), and finally analyzed by ImageLab software (Version 3.0, Bio-Rad). Target prediction SMILES of Tubeimoside-1 were obtained from Pubchem chemical information database ( https://pubchem.ncbi.nlm.nih.gov/ ) and imported into Swiss Target Prediction database ( http://www.swisstargetprediction.ch/ ), limited species for "Homo sapiens" to predict its related targets. Tubeimoside-1-related targets were also predicted using ChEMB ( https://www.ebi.ac.uk/chembl/ ) and Genecards database ( https://www.genecards.org/ ). The intersection of the above databases and the deletion of repeated targets were considered as drug targets. 
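The target-prediction workflow described in this paragraph, and completed in the disease-target search and Venn intersection step that follows, is essentially a sequence of set operations on gene lists. A minimal sketch of that logic is given below; every gene symbol is a placeholder rather than the study's actual SwissTargetPrediction/ChEMBL/GeneCards output, and the rule for combining the three compound-target lists (union with de-duplication here) is an assumption for illustration.

```python
# Placeholder compound-target lists standing in for the three database outputs.
swiss_targets = {"AKT1", "EGFR", "STAT3", "ESR1"}
chembl_targets = {"AKT1", "MAPK3", "TP53"}
genecards_targets = {"VEGFA", "EGFR", "CASP3"}

# Pool the predicted compound targets and remove duplicates (set union);
# the exact combination rule across databases is an assumption here.
compound_targets = swiss_targets | chembl_targets | genecards_targets

# Placeholder "tumor microvessels" disease-target list.
disease_targets = {"AKT1", "VEGFA", "EGFR", "STAT3", "JUN", "MAPK3"}

# Intersection targets of compound and disease, as in the Venn step below,
# which are then carried forward to the PPI network analysis.
intersection = sorted(compound_targets & disease_targets)
print(f"{len(intersection)} intersection targets: {intersection}")
```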
Moreover, the GeneCards ( https://www.genecards.org/ ) were searched with “tumor microvessels” as the keyword to obtained the candidate targets of disease. Network construction All targets were imported into UniProtKB ( http://www.uniprot.org/ ) to unify the target names. VENNY2.1 ( https://bioinfogp.cnb.csic.es/tools/venny/ ) was used to obtain the intersection targets of the compound and disease. The intersecting targets were imported into STRING ( https://cn.string-db.org/cgi/input.pl ), “Homo sapiens” was selected as the species, and medium confidence >0.4 was selected as the minimum interaction threshold, unlinked targets were hidden, and other parameters were kept at default settings. The tsv file was saved after updating and imported into Cytoscape 3.9.1. Then, network analysis was performed, and the clusters with high correlation were calculated using the CytoNCA plugin. GO and KEGG pathway enrichment analysis The above intersection targets were imported into the Metascape database ( https://metascape.org/ ) and the DAVID v6.8 database ( https://david.ncifdrf.gov/ ), and the species was set as “Homo Sapiens” to conduct the enrichment analysis of biological functions and signaling pathways. The KEGG pathways of p < 0.01 were considered significant, and the results of enrichment analysis were visualized using the microbiology online mapping platform ( http://www.bioinformatics.com.cn/ ). Statistical analysis Each experiment was performed in triplicate, and data are expressed as mean ± SEM, unless otherwise stated. Student's t-test was used to compare mean values. If data were not normally distributed or if they had unequal variances, the Mann-Whitney U test was used for comparison of two groups. All analyses were conducted using GraphPad Prism software 6 (GraphPad Software Inc., La Jolla, CA). A p-value of p < 0.05 was considered to indicate a statistically significant difference. * p <0.05, ** p <0.01, and *** p <0.001.
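The statistical analysis paragraph above switches between Student's t-test and the Mann-Whitney U test depending on normality and variance equality, but does not state how those conditions were checked. A minimal SciPy sketch of that decision is shown below on synthetic data; the use of Shapiro-Wilk and Levene tests at alpha = 0.05 is an assumption for illustration, not the study's documented procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic measurements for two groups (illustration only, not study data).
group_a = rng.normal(loc=1.0, scale=0.2, size=8)
group_b = rng.normal(loc=0.7, scale=0.2, size=8)

# Assumed checks: Shapiro-Wilk for normality, Levene for equal variances.
_, p_norm_a = stats.shapiro(group_a)
_, p_norm_b = stats.shapiro(group_b)
_, p_var = stats.levene(group_a, group_b)

if p_norm_a > 0.05 and p_norm_b > 0.05 and p_var > 0.05:
    test_name = "Student's t-test"
    statistic, p_value = stats.ttest_ind(group_a, group_b)
else:
    test_name = "Mann-Whitney U test"
    statistic, p_value = stats.mannwhitneyu(group_a, group_b)

print(f"{test_name}: statistic = {statistic:.3f}, p = {p_value:.4f}")
```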
Results TBSM1 inhibits tumor microvessel density In order to study the anti- tumor microvessels of TBSM1 in vivo , the human ovarian cancer cell SKOV3 was subcutaneously injected into the flanks of female nude mice. As the tumors were established, the mice were randomized to receive TBSM1 treatment for 2 weeks. Daily administration of 10 mg/kg/day TBSM1 significantly reduced the volume of the developing tumors (Figure 1 A). After therapy, the tumor weight in the TBMS1 and NS control groups was 323 ± 157 mg and 694 ± 304 mg, respectively (Figure 1 B). Immunohistochemical analyses of the excised tumors revealed a lower density of Ki-67-stained proliferating cells in the tumors from the TBMS1 group compared to those from the vehicle-treated animals ( P <0.05) (Figure 1 C), indicating that TBSM1 efficiently suppressed tumor growth. Furthermore, the CD31-positive microvessels in the tumors from the TBMS1 group were significantly reduced comparing with vehicle-treated group ( P <0.01) (Figure 1 C), indicating that TBSM1 efficiently suppressed tumor microvessel density (MVD). TBSM1 inhibits proangiogenic properties of vascular endothelial cells In solid tumors, resident vascular endothelial cells (VECs) possess high proangiogenic properties, including increased proliferation and migration 10 . To demonstrate the anti-angiogenic effects of TBMS1 on vascular endothelial cell growth, we analyzed the proliferation of HUVEC cells treated with TBMS1. The half-maximal inhibitory concentration (IC 50 ) of TBMS1 was higher (17.44 ± 0.75 μg/mL) for ovarian cancer cells than HUVEC cells (9.07 ± 0.95 μg/mL) (Figure 2 A), and the anti-proliferative activity of TBMS1 at 10 μg/mL against SKOV3 was less than 15% (data not shown). These results suggested that endothelial cells are more sensitive to TBSM1 treatment than cancer cells. Correspondingly, the proliferation of HUVEC cells treated with TBMS1 was suppressed in a time- and dose-dependent manner (Figure 2 B). Inhibition of cell growth is usually associated with cell cycle and apoptosis, and flow cytometric analyses revealed that HUVECs treated with TBMS1 showed a dramatic increase in cell cycle arrest and apoptosis (Figure 2 C, D). These results indicate that TBMS1 influences vascular endothelial cell proliferation by causing cell cycle arrest and inducing apoptosis. Next, we investigated the effects of TBMS1 on the HUVEC cell migration, using wound healing assays and transwell invasion assays. Treatment of HUVEC cells with TBMS1 for 24 h caused an inhibition of cell migration and invasion compared with control treated cells (Figure 3 A and B). Furthermore, an in vitro Matrigel model was employed to study tube formation, and TBMS1 treatment resulted in a significant decrease in the numbers of capillary networks (Figure 3 C). These data demonstrate that TBMS1 directly inhibits vascular endothelial cell proliferation, migration, and tube formation potential in vitro . TBSM1 suppresses vascular permeability to inhibit trans-endothelial metastasis Cancer cells adhering to VECs and subsequent migrating trans- endothelial are considered to be a key step of tumor metastasis, while endothelial barrier posed of many adhesion molecules and tight junction (TJs) proteins plays important roles in this process 11 , 12 . We queried whether TBMS1 treatment rendered VECs to be less adhesive to cancer cells. 
Indeed, TBMS1 reduced the number of cancer cells adhering to VECs that were pre-activated with tumor necrosis factor alpha (TNF-α) (Figure 4 A), which regulated the expression of adhesive molecules through NF-κB signaling 13 . Notably, TBMS1-treated VECs expressed lower levels of the adhesion molecules VCAM-1 and ICAM-1 (Figure 4 B), which were involved in cancer cell intra/extravasation 14 . Additionally, disorganized tumor vessels express lower levels of junction proteins, which disrupts vascular integrity and facilitates tumor metastasis 15 . Therefore, we performed an in vitro vascular permeability assay using rhodamine-labeled dextran, as described previously 16 . Treatment of the VEC monolayer with TBMS1 reduced the passage of dextran from the top to the bottom wells (Figure 4 C). Simultaneously, there was a marked increase of VE-cadherin and a moderate increase of the TJ proteins zonula occludens-1 (ZO-1) and claudin-5 induced by TBMS1 (Figure 4 D). These results suggest that TBMS1 suppresses trans-endothelial metastasis by reducing tumor adhesion and restoring the integrity of the endothelial cell barrier. The pharmacological mechanisms of TBSM1 against tumor angiogenesis via network pharmacology Through target prediction and database search, a total of 340 potential action targets of TBMS1 and 1,583 tumor microvessels-related targets were obtained. Moreover, 155 intersection targets were screened using online drawing to make a venn diagram (Figure 5 A; Table S2 ). Sequentially, we used STRING to construct the intersection targets of the PPI network, and the results were analyzed using Cytoscape software (version 3.8.0) (Figure 5 B). The results showed that 151 nodes and 1819 edges were obtained, and average node degree was 23.5. Furthermore, the top 9 interacting hub genes including AKT1, VEGFA, EGFR, TP53, CASP3, JUN, MAPK3, STAT3 and ESR1, were obtained based on three core networks using the CtyoNCA plugin of Cytoscape (Figure 5 C), which was identified as key hub proteins and may play an important role in the efficacy of TBMS1 in the treatment of tumor microvessels. The biological functions (GO-Biological Process, GO-BP) analysis using Metascape indicated that the above targets were mainly related to pathways in cancer, VEGFA-VEGFR2 signaling pathways, and positive regulation of locomotion, phosphorylation and cell death (Figure 5 D), and the results of the KEGG analysis using DAVID indicated that the above targets were closely associated with mechanisms such as the PI3K-Akt signaling pathway, microRNA in cancers, focal adhesion, and apopotosis (Figure 5 E). In the above “virtual studies” results, as VEGFA-VEGFR2 signaling pathways has been reported to participation in antiangiogenic effects of TBMS1 7 , we focus on “locomotion” associated with tumor microvessels, such as tumor cell adhesion and trans-endothelial migration which were reported to be regulated by the hub genes AKT1, MAPK3 and STAT3 17 - 19 . TBSM1 suppresses multiple signaling pathways associated with trans-endothelial metastasis Based on network pharmacological data, the protein phosphorylation of Akt, Erk1 and Stat3 coded by the interacting hub genes AKT1, MAPK3 and STAT3 were detected by western blotting. In our research, TBMS1 treatment significantly reduced the activation of Akt and Stat3 both at tyrosine 705 and serine 727, and reduced activation of Eek1/2 in the MAPK pathway (Figure 6 A). 
Nuclear factor κB (NF-κB) signaling activation in VECs is related to cancer cell adhesion 13 , 14 , and it was found to be reduced by TBMS1 treatment (Figure 6 B). These results suggest that TBMS1 inhibited the abnormal activity and function of tumor endothelial cells by interfering with the activation of multiple signaling pathways that are critical for angiogenesis. The reduction in Akt and Erk phosphorylation was dose-dependent, whereas that of Stat3 and p65 was not, perhaps suggesting that the Akt and Erk signaling pathways are more sensitive to TBMS1 treatment than the Stat3 and p65 signaling proteins.
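The PPI analysis earlier in this Results section ranks the intersection targets by network connectivity (via the CytoNCA plugin in Cytoscape) to nominate hub genes such as AKT1, VEGFA and EGFR. As a rough illustration of that kind of degree-based ranking, a networkx sketch on a tiny made-up edge list (not the actual STRING network) is shown below; plain degree centrality is used here, whereas CytoNCA offers several centrality measures.

```python
import networkx as nx

# Toy protein-protein interaction edges for illustration only; the real
# network in the study comes from STRING and is analyzed in Cytoscape/CytoNCA.
edges = [
    ("AKT1", "VEGFA"), ("AKT1", "EGFR"), ("AKT1", "STAT3"), ("AKT1", "TP53"),
    ("VEGFA", "EGFR"), ("STAT3", "JUN"), ("TP53", "CASP3"), ("MAPK3", "AKT1"),
    ("MAPK3", "STAT3"), ("ESR1", "AKT1"),
]

graph = nx.Graph(edges)

# Rank nodes by degree centrality; highly connected nodes are candidate hubs.
centrality = nx.degree_centrality(graph)
hubs = sorted(centrality.items(), key=lambda item: item[1], reverse=True)

for gene, score in hubs[:5]:
    print(f"{gene}: degree centrality = {score:.2f}")
```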
Discussion Tumor vessels are fundamental for tumor progression and metastatic dissemination 20 . Inhibiting angiogenesis is a therapeutic strategy for solid tumors, and strategies including monoclonal antibodies targeting VEGF or VEGFR and small-molecule tyrosine kinase inhibitors that inhibit multiple angiogenic and proliferative pathways are approved for clinical use in a variety of cancer contexts 21 , 22 . Components of traditional Chinese medicine (TCM), have been shown to have strong anti-angiogenic activity 5 , 23 , 24 . TBMS1 is an active compound form of the Chinese medicinal herb Bolbostemma paniculatum , and has been shown to have anti-cancer activity including ovarian cancer 25 . Although it's reported that TBMS1 inhibited tumor angiogenesis by regulating of angiogenesis-related growth factors and their receptors 7 , the anti-tumor vessels effectiveness of TBMS1 has not been thoroughly investigated. In the present study, we used network pharmacology and experimental validation to reveal that TBMS1 demonstrated novel anti-tumor microvessels potential by inhibiting tumor adhesion and vascular permeability. The formation of tumor microvessels is a complex process and depends on endothelial cell migration, proliferation, and capillary tube formation 26 . We firstly explored the effects and possible mechanisms of TBMS1 on vascular endothelial cells (VECs), using in vivo and in vitro models. In a SKOV3 ovarian cancer model, tumor angiogenesis suppression was more significantly than growth retardation after systemic treatment of TBMS1 (Figure 1 ). Meanwhile, we cultured VECs with TBMS1, and found that TBMS1 exhibited anti-angiogenic activity by inhibiting the pro-angiogenic properties of VECs, including blockade of hyper-proliferation via cell cycle arrest and increased apoptosis, decreased migration, and reduced formation of capillary structures (Figure 2 - 3 ). Next, the PPI network analysis showed that the hub targets of TBMS1 were mainly protein kinase family members (AKT, VEGFA, EGFR, JUN, MAPK3 and STAT3). The enrichment analyses for intersection targets showed several essential biological processes and signaling pathways in tumor microvessels, including VEGFA-VEGFR2 signaling pathways, regulation of locomotion and phosphorylation, and focal adhesion, underscoring typical “multi-ingredient, multi-target, and multi-function” pharmacological characteristics of TBMS1. Based on the integrative analysis of hub targets and GO-BP/KEGG analysis, we focused on the process of adhesion and the subsequent trans-endothelial migration (Figure 4 ). The surface of endothelial are covered with adhesion molecules, including ICAM-1 and VCAM-1, which mediate the adhesion and extravasation of cancer cells 11 . In this study, we found that TBMS1 treatment lowered VEC expression of cancer adhesion molecules (ICAM-1 and VCAM-1), and reduced cancer cells adhesion on VEC monolayers (Figure 5 ). Additionally, given the transcription of these adhesion molecules was driven predominantly by the proinflammatory transcription factor NF-κB 14 , phosphorylated p65 was inhibited by TBMS1 treatment (Figure 6 ). Additionally, VECs were strongly connected through adherent junctions, where VE-cadherin was of vital importance for the maintenance and control of endothelial cell contacts 27 , and the initial assessments of TJ proteins, such as ZO-1 and claudin-5 suggested a tumor-suppressive role, with loss/reduction resulting in increased metastasis 12 , 16 . 
Here, we found that TBMS1 treatment enhanced the expression of these junction proteins and thereby reduced the vascular permeability of endothelial monolayers, as determined by evaluating the passage of dextran (Figure 5 ). Finally, phosphorylation of the hub proteins Akt, Erk1/2 and Stat3, which have been reported to regulate adhesion and the subsequent trans-endothelial migration process, was found to be decreased after TBMS1 treatment (Figure 5 ). These results suggest that TBMS1 reduces tumor microvessels by interfering with the activation of multiple signaling pathways that are critical for angiogenesis. Although multiple molecules and a variety of pathways have been recognized as possible targets of TBMS1 6 , the precise binding targets of TBMS1 have not been identified using proteome microarray, co-immunoprecipitation, or other assays, and will be clarified in our further studies. Finally, one study showed that TBMS1 promoted angiogenesis via activation of the eNOS-VEGF signaling pathway and acted as a novel agent for therapeutic angiogenesis in ischemic diseases 28 , which appears to contradict our findings. However, this apparent paradox is neither unique to TBMS1 nor unexplained, as such dual activity is frequently seen with natural drugs 29 . Therefore, it is not surprising that TBMS1 was found to promote angiogenesis at low concentrations (0.5-2 μM) in normal tissues but to trigger anti-angiogenesis at high concentrations (5 and 10 μg/mL, equal to 3.79 and 7.58 μM) in tumor tissue. Together, these observations indicate that TBMS1 can act as a modulator of angiogenesis homeostasis.
Conclusion Our study proposes that TBMS1 suppresses the abnormal activity and function of endothelial cells involved in cancer cell adhesion and tumor vascular permeability. These findings suggest that TBMS1 might be a powerful inhibitor of tumor microvessels, suppressing tumor adhesion and vascular permeability and ultimately reducing tumor growth and metastasis.
*These two corresponding authors contributed equally to this article. Competing Interests: The authors have declared that no competing interest exists. Objective: Tubeimoside-1 (TBMS1) is a plant-derived triterpenoid saponin that exhibits pharmacological properties and anti-tumor effects, but its mechanism of action on tumor microvessels remains to be completely elucidated. This study aims to verify the effect of TBMS1 on tumor microvessels and its underlying mechanism. Methods: A SKOV3 xenografted mouse model was constructed to evaluate the anti-tumor microvessel effects of TBMS1 in vivo , followed by functional assays to verify the effects of TBMS1 on the proliferation, cell cycle, migration, and tubule formation of vascular endothelial cells in vitro . Next, based on network pharmacology, drug/disease-target protein-protein interaction (PPI) networks, biological function and gene enrichment analyses were performed to predict the underlying mechanism. Finally, molecules and pathways associated with tumor trans-endothelial migration were identified. Results: TBMS1 treatment effectively reduced tumor microvessel density in an ovarian cancer model and inhibited the proliferation, cell cycle progression, and migration of vascular endothelial cells while inducing their apoptosis in vitro . Network pharmacological data suggested that tumor cell adhesion and trans-endothelial migration may participate in the antiangiogenic effects of TBMS1. Using endothelial adhesion and permeability assays, we found that tumor cell adhesion and the permeability of endothelial monolayers were reduced by TBMS1. Furthermore, adhesion proteins (VCAM-1 and ICAM-1) and junction proteins (VE-cadherin, ZO-1 and claudin-5) were found to be regulated. Finally, Akt, Erk1/2, Stat3 and NF-κB signaling were decreased by TBMS1 treatment. Conclusion: Taken together, our findings strongly suggest that TBMS1 may serve as a vasoactive drug to suppress tumor progression.
Supplementary Material
This work was supported by Wu Jie Ping Medical Foundation (No. 320.6750.18315).
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):955-965
oa_package/13/d7/PMC10788730.tar.gz
PMC10788731
0
Introduction Common tumors, such as breast, lung and prostate cancer, frequently metastasize to multiple bones in the body and induce significant bone pain. The mechanisms of bone cancer pain are highly complicated. Cancer cells metastasize to the bone, where they release algogenic substances and protons and create acidosis that act on the receptors of peripheral nociceptors 1 , 2 , thereby inducing peripheral sensitization. The spinal cord receives the input from the primary afferent neurons. The activated spinal neurons can in turn release various excitatory neurotransmitters, including glutamate, adenosine triphosphate (ATP) and calcitonin gene-related peptide (CGRP), to act on their receptors on postsynaptic neurons, which can increase intracellular Ca 2+ levels 3 , 4 . Upregulation of Ca 2+ facilitates signal transmission by activating Ca 2+ -sensitive proteins, such as calcium/calmodulin-dependent protein kinase II (CaMKII) and mitogen-activated protein kinase (MAPK), to further enhance excitability in spinal cord neurons. The detailed roles and mechanisms of D1DR and D2DR in chronic bone cancer pain remain largely unexplored. D1DR and D2DR have been reported to be mainly expressed in neurons of the spinal cord 5 , 6 . Activated spinal neurons play an important role in the development and maintenance of chronic pain 7 , and inhibiting the activation and excitability of spinal neurons can markedly attenuate bone cancer pain 8 . It has been reported that D1DR couples to Gs/olf proteins to activate cyclic adenosine monophosphate (cAMP) signaling, whereas D2DR couples to Gi/o proteins to inhibit adenylyl cyclase (AC) 9 . It is widely accepted that activated cAMP signaling can phosphorylate cAMP response element-binding protein (CREB) to increase neuronal excitability in rodent hippocampal neurons 10 and striatal neurons 11 , while inhibiting cAMP decreases neuronal excitability 12 . The dopamine D1/D2DR heteromer was first identified in rat striatum 13 and was reported to couple to the Gq/11 protein, a finding that suggested a direct link between dopamine and calcium signaling 14 - 18 . Increased intracellular calcium has been implicated in the increased excitability of neurons 19 and the development of chronic pain. Our previous research indicated that spinal dopamine D1 and D2 receptors form heteromers in the spinal cord in neuropathic pain 20 . Herein, we further tested the hypothesis that D1DR and D2DR might form heteromers that induce the activation of spinal neurons, thereby promoting the development of bone cancer pain. This study aimed to investigate the role and mechanisms of spinal D1DR, D2DR, and their heteromers in bone cancer pain. Corydalis yanhusuo W.T. Wang is one of the most famous analgesics in China; tetrahydroprotoberberines are its main active ingredients and have been shown to act as dopamine agonists or antagonists 21 , 22 . l- CDL, one of the trace tetrahydroprotoberberines of Corydalis yanhusuo W.T. Wang, has been reported to exert strong anti-nociception without notable side effects 23 - 26 . Our previous research showed that l- CDL could inhibit the formation of the D1/D2DR complex to alleviate neuropathic pain 20 , so the present study further investigated whether it could inhibit the D1/D2DR complex to relieve bone cancer pain.
Materials and Methods Ethics Statement All experimental protocols were approved by the Animal Experimentation Ethics Committee of China Pharmaceutical University and adhered to the guidelines of the International Association for the Study of Pain (IASP). Meanwhile, the experiments we did were designed to minimize suffering and the number of animals used. Experimental Animals Sprague-Dawley rats weighing 180-220 g and 60-80 g were purchased from the Experimental Animal Center at Yangzhou University (Jiangsu Province, China, SCXK-SU-2016-0011). Rats were housed three per cage in a temperature and humidity-controlled environment on a 12 h light/dark cycle for 3-7 days to allow acclimatization. Rats were anesthetized with pentobarbital (50 mg/kg, i.p.) and euthanized with carbon dioxide. Subsequently, the rats were randomly allocated to the following groups: 1) Control; 2) TCI; 3) TCI + D1DR/D2DR antagonists (20 μg/20 μL, i.t.); 4) TCI + D1DR/D2DR siRNA (1 μg/20 μL, i.t.); 5) TCI + D1DR/D2DR/D1/D2DR heteromer agonists (2 μg/20 μL, i.t.); 6) TCI + D1DR/D2DR/D1/D2DR heteromer agonists (2 μg/20 μL, i.t.) + D1DR/D2DR antagonists group (20 μg/20 μL, i.t.); 7) TCI + l -CDL (15 mg/kg, p.o. or 15 μg/20 μL, i.t.); 8) TCI + D1DR/D2DR/D1/D2DR heteromer agonists (2 μg/20 μL, i.t.) + l -CDL (15 mg/kg, p.o. or 15 μg/20 μL, i.t.). Behavioral testing was performed during the light cycle (between 9:00 a.m. and 5:00 p.m.). Six animals were assigned to each group for behavioral test and four animals were assigned to each group for molecular testing. Materials levo -Corydalmine (purity ≥ 99.0%, as detected by HPLC) was provided by China Pharmaceutical University (Nanjing, China). Anti-p-p44/42 MAPK (p-ERK1/2) (#4377S), anti-ERK1/2 (#4695S), anti-p-JNK (#9255S), anti-JNK (#9252S), anti-p-p38 MAPK (p-p38) (#9215S), anti-p38 MAPK (#9212S), anti-p-CaMKII (#12716S), and anti-CaMKII (#3362S) were purchased from Cell Signaling Technology (Beverly, MA). Anti-D1DR was purchased from Abcam (#ab20066) (Cambridge, MA) and Santa Cruz Biotechnology (#sc-31479) (Santa Cruz, CA). Anti-D2DR was from purchased Santa Cruz Biotechnology (Santa Cruz, CA) (#sc-5303). Neurobasal medium, fetal bovine serum and RPMI 1640 medium were purchased from Gibco (Gaithersburg, MD). Trypsin and soybean trypsin inhibitors were obtained from Atlanta Biologicals (Norcross, GA). Agonists and antagonists were purchased were from Tocris Bioscience (Ellisville, MO), NHS magnetic beads was purchased from the Enriching biotechnology (Nanjing, China), all other reagents were purchased from Sigma-Aldrich (St. Louis, MO). The siRNA targeting D1DR (NM_012546) and D2DR (NM_012547) were synthesized by GenePharma Co. (Shanghai, China). The respective sequences were as follows, sense: A 5'-GGUGACCAACUUCUUUGUCTT-3', B 5'-GACAAAGAAGUUGGUCACCTT-3', antisense: A 5'-CUACUAUGCCAUGCUGCUCTT-3', B 5'-GAGCAGCAUGGCAUAGUAGTT-3'. Nonspecific oligonucleotide controls consisted in randomly scrambled sequences of siRNA groups (conRNA). 33 μg siRNA and 49.5 μg polyethyleneimine (PEI) was diluted in 165 μL of 5 % glucose solutions respectively and were mixed and incubated for 15 min at RT before use 27 , 28 . For the siRNA group, each rat received multiple daily intrathecal injections of D1DR and D2DR siRNA mixed solution (1 μg/20 μL) for 8 consecutive days, and the control RNA group receive conRNA (1 μg/20 μL) for 8 days. Antinociception was measured at 0.5 h after siRNA treatment for 1-7 day and at 0.5 h, 2 h, 4 h, and 8 h on the 8 th day after siRNA treatment. 
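As a sanity check on the siRNA sequences listed above, the two strands of each duplex should be reverse complements over their 19-nt cores, with the 3' "TT" (dTdT) overhangs excluded. The small sketch below verifies exactly that for the published sequences; the pairing of the A/B entries under each heading into one duplex, and the treatment of the trailing "TT" as an overhang, are assumptions made for this check.

```python
# Verify reverse complementarity of the paired siRNA strands over their
# 19-nt cores (3' "TT" overhangs excluded). Pairing into duplexes is an
# assumption for this illustration.
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    return "".join(RNA_COMPLEMENT[base] for base in reversed(rna))

def strip_overhang(strand: str) -> str:
    # The synthesized strands carry a 3' dTdT overhang written as "TT".
    return strand[:-2] if strand.endswith("TT") else strand

duplexes = {
    "duplex 1": ("GGUGACCAACUUCUUUGUCTT", "GACAAAGAAGUUGGUCACCTT"),
    "duplex 2": ("CUACUAUGCCAUGCUGCUCTT", "GAGCAGCAUGGCAUAGUAGTT"),
}

for name, (strand_a, strand_b) in duplexes.items():
    core_a = strip_overhang(strand_a)
    core_b = strip_overhang(strand_b)
    complementary = reverse_complement(core_a) == core_b
    print(f"{name}: cores are reverse complements: {complementary}")
```

Running this prints True for both duplexes, which is consistent with the listed sequences forming proper double-stranded siRNAs.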
Model of Bone Cancer Pain induced by Intra-Tibia Inoculation of Walker 256 Mammary Gland Carcinoma Cells TCI-induced bone cancer pain was established according to our previous research 29 . Walker 256 mammary gland carcinoma cells (5×10 6 cells/mL, 0.5 mL) were intraperitoneally injected into rats weighing 60-80 g. The ascites were extracted and centrifuged at 400 g for 6 min to get the cells 5-7 days later. The cells were washed with iced 0.01 M phosphate-buffered saline (PBS) and then diluted to a density of 1×10 5 cells/μL with 0.01 M PBS. Rats were anesthetized and the tibia head of the left leg was exposed with minimal damage. Next, 5 μL Walker 256 ascites tumor cells were slowly injected into the medullary cavity, and 5 μL PBS were injected as a control. To stop the cells from coming out, the syringe was held still for 1 min and bone wax was subsequently applied for 3 min. The injection hole was closed with dental materials. Behavioral Assays for Bone Cancer Related Pain Before the test, rats were placed in individual transparent plastic mesh cage to accommodate the environment, then a series of Von Frey hairs (1.4-15.0 g) were used to stimulate the hind paw of rats with logarithmically incremental stiffness for about 6 s each. A positive response was defined as a quick withdrawal or licking of hind paw upon the stimulus. Whenever a positive response to a stimulus occurred, the next lower Von Frey hair was applied, and vise verse. Each rat was tested three more times and the applied force (g) was recorded. Then the average of the threshold was measured as mechanical withdrawal threshold (MWT) 29 . Intrathecal Injection Procedure The rat was placed in a prone position and the midpoint between the tips of the iliac crest was located. Using a stainless steel needle (30 gauge) by means of lumbar puncture at the intervertebral space of L4-5 or L5-6. The injection did not affect the baseline pain threshold of the rats and a proper injection would be accompanied by a tail flick 30 . Primary Cultures of Spinal Neurons The spinal cords of the embryos were removed aseptically on day 13 of gestation 31 and digested in 0.15 % typsin at 37 °C for 25 min. The cell suspension was centrifuged at 200 g for 4 min and then resuspended in solution containing DNase and soybean trypsin inhibitor. The solution containing MgCl 2 and CaCl 2 was added to the cells for 15 min, and the supernatant was collected and centrifuged at 200 g for 4 min. Neurobasal plating medium containing 10 % fetal bovine serum (FBS) supplement and 1 % l- glutamine was added. Cells were planted onto poly- l- lysine-pretreated 96-well (9 mm) clear-bottomed black plates with a density of 2.5×10 6 cells/well 19 , 32 . Measurement of Intracellular Ca 2+ Concentration On day 9, the dye loading buffer containing 4 μM fluo-8 was added (100 μL/well) and incubated for 1 h. Then the cells were subsequently washed 5 times with Locke's buffer (the vehicle), leaving a final volume of 150 μL in each well. The plate was then transferred to a FLIPR (Molecular Devices, Sunnyvale, CA) chamber. Fluorescence reading was taken for 5 min to establish the baseline, and then the first test compound solutions (8 ×) (25 μL) were added to the corresponding well. 5 min after the fluorescence readings were taken for, the second compounds (25 μL) were added to the cells and the fluorescence readings were taken for another 10 min. 
The supernatant of primary culture spinal astrocytes stimulated with LPS for 12 h was added to the spinal neurons at the day 9 for 0.5 h before the measurement of Ca 2+ . Western Blotting In brief, spinal cord segments at L4-L6 were collected at 2 h after drug treatment and lysed in RIPA. The supernatant was collected and separated on sodium dodecyl sulfate-polyacrylamide gels, and transferred onto polyvinylidene difluoride membranes. The membranes were blocked with 5 % bovine serum albumin (BSA) for 2 h at room temperature (RT) and incubated with primary antibodies for 3 h at RT and overnight in 4 °C. Subsequently, the membranes were washed with 0.1 % tris buffered saline tween (TBST) and incubated with secondary antibodies (1:3000) at RT for 2 h. The immunoreactivity was detected using enhanced chemiluminescence (ECL) regents (PerkinElmer, Waltham, MA). Data were analyzed with the associated software Quantity one-4.6.5 (Bio-Rad Laboratories). Immunofluorescence The L4-L6 spinal cords were collected after the rats were perfused with 0.01 M PBS followed by 4 % paraformaldehyde (PFA) on day 14 after the model was established. The spinal cords were post-fixed with the same 4 % (PFA) for 1 day and then transferred to 30 % sucrose for 3-5 days. The spinal cords were cut into 25 μm thick segments and blocked with 10 % normal donkey serum containing 0.3 % Triton-X-100 (Sigma-Aldrich, St. Louis, MO). Subsequently, the sections were incubated with the primary antibodies for 16-19 h at 4 °C and then incubated with secondary antibodies for 2 h after washed with 0.01 M PBS. The tissue sections were washed with PBS and mounted to be observed under a laser-scanning microscopy (Carl Zeiss LSM700, Germany). To obtain quantitative measurements, 8 images were evaluated for each group and photographed at the same exposure time to generate the raw data. Fluorescence intensities in the different groups were analyzed using Image Pro Plus 6.0 (Media Cybernetics, Silver Spring, MD, USA). Co-immunoprecipitation In brief, 10 μg primary antibody was diluted with 500 μL coupling buffer (rat anti-D1DR or mouse anti-D2DR) and added to the NHS magnetic beads. After incubated sufficiently for 4 h at 4 °C, the supernatant was removed and 500 μL blocking buffer was added to incubated for 1 h at 4 °C. Tissues (spinal cord segments at L4-L6 of rats) were lysed in ice-cold RIPA buffer, and incubated with beads-Ab heteromers overnight at 4 °C. After that, the immunoprecipitates were incubated with 100 μL elution buffers for 5 min at RT to dissociate the heteromers. The supernatant was transferred and incubated in SDS sample buffer for 10 min at 100 °C. Statistical Analysis All values are depicted as mean ± SEM and the statistical analyses were performed using SPSS Rel 15 (SPSS Inc., Chicago, IL, USA). Data of western blot, immunofluorescence and behavioral tests were statistically analyzed by one-way analysis of variance (ANOVA) and two-way ANOVA followed by Bonferroni's post-hoc tests with significance at P < 0.05.
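For the one-way ANOVA with Bonferroni post-hoc comparisons described in the statistics paragraph above, a minimal SciPy sketch is given below. The data are synthetic, the group names are placeholders, and the post-hoc step shown here (pairwise t-tests with Bonferroni-adjusted p-values) is only an approximation of what a dedicated post-hoc routine such as SPSS's may compute, since the latter can pool the error term across all groups.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic withdrawal-threshold-like values for three groups (not real data).
groups = {
    "control": rng.normal(15.0, 1.5, size=6),
    "TCI": rng.normal(6.0, 1.5, size=6),
    "TCI + antagonist": rng.normal(11.0, 1.5, size=6),
}

# Overall one-way ANOVA across the three groups.
f_stat, p_overall = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_overall:.4f}")

# Pairwise t-tests with Bonferroni correction (p multiplied by the number
# of comparisons, capped at 1.0).
pairs = list(combinations(groups, 2))
for name_a, name_b in pairs:
    t_stat, p_raw = stats.ttest_ind(groups[name_a], groups[name_b])
    p_adj = min(p_raw * len(pairs), 1.0)
    print(f"{name_a} vs {name_b}: adjusted p = {p_adj:.4f}")
```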
Results Blockade of spinal dopamine D1DR and D2DR attenuated TCI-induced bone cancer pain and the expression of D1/D2DR heteromers were significantly increased in TCI-induced bone cancer pain Our results indicated that intrathecal administration of D1DR and D2DR antagonists (5, 10, and 20 μg/20 μL) significantly attenuated TCI-induced bone cancer (Figure 1 A, B). Intrathecal administration of D1DR and D2DR siRNA (1 μg/20 μL) for 7 and 8 days respectively also alleviated TCI-induced chronic pain, while control RNA (conRNA) did not affect the mechanical threshold of rats (Figure 1 C). Western blot results showed that in the siRNA group, the expression of D1DR and D2DR was decreased in the spinal cord, compared to that in the control (Figure 1 D). Furthermore, the immunofluorescence (Figure 1 E) and Co-IP (Figure 1 F) results showed that D1DR and D2DR co-expressed in the spinal cord of rats. The co-expression of D1DR and D2DR was significantly upregulated in TCI-induced bone cancer pain. Intrathecal administration of D1DR and D2DR antagonists decreased D1DR and D2DR co-expression. Spinal D1DR and D2DR formed complexes to promote TCI-induced bone cancer pain through the Gq-PLC-IP3 pathway It has been reported that both D1DR and D2DR antagonists could reduce D1/D2DR heteromers 16 , 33 . Our results showed that the antinociception induced by D1DR antagonist SCH 23390 (Figure 2 A-C) and D2DR antagonist L-741,626 (Figure 2 D-F) could not only be reversed by D1DR agonist SKF 38393, but also alleviated by D2DR agonist Quinpiride, and D1/D2DR heteromer agonist SKF 83959. Furthermore, Gq inhibitor YM 254890, PLC inhibitor U73122, IP3 inhibitor 2-APB, and AC inhibitor SQ22536 could attenuate TCI-induced chronic bone cancer pain (Figure 2 G). To explore whether D1DR antagonist-induced antinociception was mediated by cAMP, D1DR agonist SKF 83822, which exclusively activated the cAMP was used. Antinociception induced by D1DR antagonist SCH 23390 (Figure 2 H) and D2DR antagonist L-741,626 (Figure 2 I) could not be reversed by SKF 83822. D1DR, D2DR, and D1/D2DR agonists-upregulated Ca 2+ oscillations in primary cultured spinal neurons could be inhibited by D1DR and D2DR antagonists Our results showed that both D1DR antagonist SCH 23390 and D2DR antagonist L-741,626 could eliminate the basal synchronous Ca 2+ oscillations activity in primary cultured spinal neurons (Figure 3 A and B, trace 2-3; time scale 300-600 seconds). Administration of D1DR agonist SKF 38393, D2DR agonist Quinpiride, and D1/D2DR heteromer agonist SKF 83959 could increase Ca 2+ oscillations in spinal neurons (Figure 3 A-F, trace 4; time scale 600-900 seconds), which could be eliminated with D1DR and D2DR antagonists, respectively (Figure 3 A-F, trace 4-6; time scale 600-850 seconds). Herein, the supernatant of primary cultured spinal actrocytes stimulated with LPS for 12 h was added to the spinal neurons at the day 9 for 0.5 h. Intrathecal administration of D1DR and D2DR antagonists/siRNA inhibited the expression of p-CaMKII, p-ERK, p-JNK, and p-p38 in the spinal cord D1/D2DR heteromers couple to Gq to increase intracellular Ca 2+ , which in turn activate several kinases including CaMKII and MAPKs 34 . Our results herein indicated that D1DR and D2DR antagonists could decrease the upregulated expression of p-CaMKII, p-ERK, p-JNK, and p-p38 in the spinal cord (Figure 4 A-D). And D1DR and D2DR siRNA also significantly decreased the expression of p-CaMKII, p-ERK, p-JNK, and p-p38 (Figure 4 E-H). 
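FLIPR-style Ca2+ traces like the ones summarized in the oscillation experiments above are usually expressed relative to the pre-addition baseline (ΔF/F0). A minimal sketch of that normalization on a synthetic trace is shown below; the 5-min baseline window matches the baseline read described in the Methods, but the sampling rate, response shape, and all numbers are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic fluorescence trace sampled once per second for 15 minutes:
# 5 min baseline, then a decaying response after compound addition.
time_s = np.arange(900)
trace = 100 + rng.normal(0, 1.0, size=900)
trace[300:] += 40 * np.exp(-(time_s[300:] - 300) / 120.0)

baseline = trace[:300].mean()          # mean fluorescence before addition (F0)
dff = (trace - baseline) / baseline    # ΔF/F0

print(f"baseline F0 = {baseline:.1f}")
print(f"peak ΔF/F0 after addition = {dff[300:].max():.2f}")
```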
l- CDL-induced antinociception in TCI rats could be reversed by D1DR, D2DR, and D1/D2DR heteromer agonists Our study showed that l- CDL has high affinity for D1DR (see Additional file 1: Figure s1 ) and D2DR 35 , with half maximal inhibitory concentrations (IC50) of 0.20 μM and 0.86 μM, respectively. Herein, our results indicated that the D1DR agonist SKF 38393, the D2DR agonist Quinpirole, and the D1/D2DR heteromer agonist SKF 83959 could reverse the antinociception induced by both intragastric (15 mg/kg) (Figure 5 A-C) and intrathecal (15 μg/20 μL) (Figure 5 D-F) administration of l- CDL in TCI rats (SKF 38393, Quinpirole, and SKF 83959 were administered 15 min before l- CDL treatment). However, the antinociception induced by l- CDL (15 μg/20 μL, i.t.) could not be reversed by SKF 83822 (Figure 5 G). l- CDL-induced inhibition of p-CaMKII, p-ERK, p-JNK, and p-p38 could be reversed by D1DR, D2DR, and D1/D2DR heteromer agonists in the spinal cord of TCI rats Our previous study indicated that l- CDL markedly alleviated TCI-induced bone cancer pain. Herein, our results further confirmed that l- CDL (15 mg/kg, p.o.) could decrease the expression of p-CaMKII, p-ERK, p-JNK, and p-p38. Intrathecal administration of the D1DR agonist SKF 38393 (Figure 6 A-D), the D2DR agonist Quinpirole (Figure 6 E-H), and the D1/D2DR heteromer agonist SKF 83959 (Figure 6 I-L) (15 minutes before l- CDL on the 14th day after TCI) reversed the l- CDL (15 μg/20 μL, i.t.)-induced inhibition of the upregulated p-CaMKII, p-ERK, p-JNK, and p-p38 in TCI-induced bone cancer pain. These results suggested that p-CaMKII, p-ERK, p-JNK, and p-p38 are involved in the D1/D2DR heteromer-mediated analgesia of l- CDL.
Discussion In this study, the principal findings are as follows: (1) Intrathecal administration of D1DR/D2DR antagonists or siRNA could significantly alleviate TCI-induced bone cancer pain; (2) D1DR and D2DR form heteromers in spinal neurons that promote bone cancer pain by activating Gq protein and thereby increasing neuronal excitability, leading to the activation of CaMKII and MAPK signaling; (3) l- CDL, a natural compound, could attenuate TCI-induced chronic bone cancer pain through inhibiting D1/D2DR heteromers. Activating or antagonizing spinal D1DR and D2DR has been reported to inhibit the development of pain 36 - 39 . D1DR and D2DR have also been reported to form heteromers in the rat and monkey brain and to have a potentially considerable influence on disorders such as drug addiction, schizophrenia and depression 15 - 18 . Our previous research also confirmed that D1DR and D2DR form a complex to promote neuropathic pain 20 . Herein, further results indicated that D1DR and D2DR could form heteromers in the spinal cord, and that intrathecal administration of both D1DR and D2DR antagonists could inhibit the heteromers in TCI rats, which was consistent with previous findings that D1/D2DR heteromer-mediated signaling could be attenuated by D1DR and D2DR antagonists, respectively 16 , 33 . The antinociception induced by D1DR and D2DR antagonists could be reversed by D1DR, D2DR, and D1/D2DR heteromer agonists, which indicated that D1DR and D2DR antagonists attenuate TCI-induced bone cancer pain through inhibiting D1/D2DR heteromers. D1/D2DR heteromers were reported to couple to Gq, which might lead to intracellular calcium mobilization from IP3 receptor-sensitive stores through a cascade of events involving rapid translocation of Gq to the plasma membrane and activation of PLC. Gq protein, IP3-mediated calcium signaling, and PLC 40 , 41 have all been reported to mediate nociceptor sensitization. Their effects in bone cancer pain have not been explored, however. To further confirm whether D1DR and D2DR antagonists attenuate TCI-induced bone cancer pain through inhibiting D1/D2DR heteromers and thereby suppressing activation of the Gq-PLC-IP3 pathway, the effects of the Gq inhibitor YM 254890, the IP3 inhibitor 2-APB, and the PLC inhibitor U73122 on bone cancer pain were explored. Intrathecal administration of YM 254890, 2-APB, and U73122 could all attenuate TCI-induced bone cancer pain. We also asked whether the effects of D1DR and D2DR antagonists on chronic bone cancer pain involved Gi/o or Gs proteins. It has been reported that the Gi/o protein inhibitor pertussis toxin (PTX) produces hyperalgesia and allodynia 42 . Herein, intrathecal administration of the AC inhibitor SQ22536 could also attenuate TCI-induced bone cancer pain. To verify whether the D1DR antagonist attenuates TCI-induced bone cancer pain through the Gs protein-AC pathway, SKF 83822, which exclusively activates cAMP signaling 16 , 43 , was used. The antinociception induced by D1DR and D2DR antagonists could not be reversed by SKF 83822, indicating that it was not mediated through inhibition of the AC pathway. Further in vitro studies were conducted to confirm whether D1DR and D2DR form heteromers that increase the excitatory state of neurons. Spontaneous Ca 2+ transients have been implicated in regulating plasticity in developing neurons 44 . Primary cultured spinal neurons display synchronized spontaneous Ca 2+ oscillations 19 .
Our study showed that administration of D1DR and D2DR antagonists could reduce the Ca 2+ oscillations in spinal neurons. Administration of D1DR, D2DR, and D1/D2DR heteromer agonists increased Ca 2+ oscillations, which in turn could be reduced by D1DR and D2DR antagonists. D1/D2DR heteromers couple to Gq, which activates CaMKII, which in turn also promotes the development of chronic pain through activating MAPK 34 . The expression of p-CaMKII, p-ERK, p-JNK, and p-p38 was therefore examined. Both D1DR and D2DR antagonists, as well as the corresponding siRNAs, could inhibit the expression of p-CaMKII, p-ERK, p-JNK, and p-p38. The important role of D1/D2DR heteromers in TCI-induced bone cancer pain makes them an attractive target to attenuate bone cancer pain. l- CDL, a trace ingredient from traditional Chinese medicine, significantly attenuated chronic bone cancer pain and other models of neuropathic pain in our previous studies without notable side effects 25 , 26 , 29 . l -CDL belongs to the tetrahydroprotoberberines (THPBs), which have been reported to show affinity for dopamine receptors and possess a variety of beneficial effects without notable side effects 21 , 22 . l- CDL showed high affinities for both D1DR (see Additional file 1: Figure s1 ) and D2DR 35 , with IC50 values of 0.20 μM and 0.86 μM, respectively. Herein, our results indicated that l- CDL-induced antinociception could be reversed by D1DR, D2DR, and D1/D2DR heteromer agonists but not by SKF 83822, which exclusively activates the cAMP pathway. l- CDL-induced inhibition of p-CaMKII, p-ERK, p-JNK, and p-p38 could be alleviated by D1DR, D2DR, and D1/D2DR heteromer agonists. l -CDL has been reported in our previous study to alleviate TCI-induced chronic bone cancer pain through inhibiting NMDA and mGlu1/5 receptors 29 . Synergistic effects in such a multi-target approach might largely explain the strong observed effects in attenuating bone cancer pain. We provided the first experimental evidence that spinal D1DR and D2DR might promote chronic bone cancer pain through forming D1/D2DR heteromers, thereby leading to the activation of Gq proteins and the downstream CaMKII and MAPK signaling to increase excitability in spinal neurons. l -CDL, a natural compound, was found to attenuate TCI-induced chronic bone cancer pain by antagonizing spinal D1DR and D2DR to inhibit D1/D2DR heteromers and, in turn, the downstream CaMKII and MAPK signaling. Altogether, these findings may provide new avenues to identify more effective and safer targets for chronic bone cancer pain.
# These authors contributed equally to the work. Competing Interests: The authors have declared that no competing interest exists. Background: Dopamine receptors have been reported to be involved in pain, while their exact effects and mechanisms in bone cancer pain have not been fully explored. Methods: A bone cancer pain model was created by implanting Walker 256 mammary gland carcinoma cells into the right tibia bone cavity. Primary cultured spinal neurons were used for in vitro evaluation. FLIPR, western blotting, immunofluorescence, and Co-IP were used to examine cell signaling pathways. Results: Our results indicated that the spinal dopamine D1 receptor (D1DR) and spinal dopamine D2 receptor (D2DR) could form heteromers in TCI rats, and antagonizing spinal D1DR and D2DR reduced heteromer formation and alleviated TCI-induced bone cancer pain. Further results indicated that the antinociception induced by D1DR or D2DR antagonists in TCI rats could be reversed by D1DR, D2DR, and D1/D2DR heteromer agonists. Gq, IP3, and PLC inhibitors also attenuated TCI-induced bone cancer pain. In vitro results indicated that D1DR or D2DR antagonists decreased the Ca 2+ oscillations upregulated by D1DR, D2DR, and D1/D2DR heteromer agonists in activated primary cultured spinal neurons. Moreover, the antinociception induced by inhibition of D1/D2DR heteromers in TCI rats was partially mediated by the CaMKII and MAPK pathways. In addition, a natural compound, levo -Corydalmine ( l- CDL), could inhibit D1/D2DR heteromers and attenuate bone cancer pain. Conclusion: Inhibition of spinal D1/D2DR heteromers via l- CDL decreases excitability in spinal neurons, which might represent a new therapeutic strategy for bone cancer pain.
Supplementary Material
Funding The Chinese National Natural Science Foundation Youth Fund project (Grant number 81803752), the "Double First-Class" University project (Grant number CPU2018GY32), the China Postdoctoral Science Foundation program (Grant number 1600020009), the China Postdoctoral Special Funding program (Grant number 1601900013), and the Fundamental Research Funds for the Central Universities (Grant number 2632023TD06) all contributed to this work. We would like to thank Editage (www.editage.com) for English language editing. Ethics approval and consent to participate All procedures were strictly performed in accordance with the guidelines of the International Association for the Study of Pain and the Guide for the Care and Use of Laboratory Animals (The Ministry of Science and Technology of China, 2006). All animal experiments were approved by the Animal Experimentation Ethics Committee of China Pharmaceutical University. Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Author contributions X-N M: Conceptualization, Data curation, Investigation, Writing - original draft. C-H Y: Conceptualization, Data curation, Investigation, Methodology. Y-J Y: Conceptualization, Data curation, Formal analysis, Validation. X L: Conceptualization, Data curation, Investigation, Validation. M-Y Z: Investigation, Validation. J Y: Investigation. S Z: Investigation. B-Y Y: Writing - review & editing. W-L D: Funding acquisition, Supervision, Writing - review & editing. J-H L: Conceptualization, Funding acquisition, Supervision, Writing - review & editing. Abbreviations D1DR: dopamine D1 receptor; D2DR: dopamine D2 receptor; TCI: tibia bone cavity tumor cell implantation; FLIPR: fluorometric imaging plate reader; Co-IP: co-immunoprecipitation; ATP: adenosine triphosphate; CGRP: calcitonin gene-related peptide; CaMKII: calcium/calmodulin-dependent protein kinase II; MAPK: mitogen-activated protein kinase; cAMP: cyclic adenosine monophosphate; AC: adenylyl cyclase; THPBs: tetrahydroprotoberberines; l-CDL: levo -Corydalmine; BSA: bovine serum albumin; TBST: Tris-buffered saline with Tween
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):1041-1052
oa_package/00/60/PMC10788731.tar.gz
PMC10788732
0
Background Glioblastoma (GBM) is the most aggressive type of intracranial malignancy, accounting for 33% of all intracranial tumors 1 . It can be diagnosed by a variety of techniques such as positron emission tomography, computed tomography, and magnetic resonance imaging 2 . At present, the treatment of GBM is mainly surgery, supplemented by chemoradiotherapy with the standard chemotherapy drug temozolomide 3 , 4 . However, due to the heterogeneity and complexity of tumor cells, high local aggressiveness and prominent neovascularization, patients with GBM have a poor prognosis 5 , 6 . In addition, glioblastoma harbors stem cells, which have the capacity for differentiation and self-renewal and are highly resistant to radiotherapy and chemotherapy, further affecting the prognosis and survival rate of patients 7 , 8 . Hence, it is necessary to explore new therapeutic methods for the clinical treatment of GBM. In order to improve the prognosis of patients with GBM, many teams have tried to find new treatments, including monoclonal antibodies, small molecule inhibitors and cancer vaccines 9 , 10 . Meanwhile, emerging tumor inhibitors targeting DNA damage/repair pathways, the tumor suppressor protein p53, growth factor receptors, cell cycle control enzymes/genes, and their downstream pathways are used as alternative/supplementary anti-cancer strategies. These targeted drugs can also be used as radiosensitizers to enhance the cytotoxicity of radiation therapy while minimizing harmful side effects on surrounding normal tissue 11 . Therefore, we aim to expand the list of molecular targets and to design novel small molecule inhibitors with central nervous system penetration to gain new insights for GBM treatment. NCDN, a 79 kDa cellular protein, is highly conserved in vertebrates 12 . Its expression in nerve cells is relatively specific, while its expression in skeletal muscle, heart, kidney, and myeloid cells is much lower 13 - 16 . In nerve cells, NCDN is localized to the cell body, dendritic shafts, and dendritic spines in a vesicle-like structure, and is able to maintain cell polarity by regulating dendritic morphogenesis and nerve cell signaling pathways 17 - 19 . It has been found that the absence of NCDN can increase the production of reactive oxygen species, thus affecting a wide range of pathogenic processes and signaling pathways. It also plays a role in intracellular transport by regulating the localization of signal proteins such as P-Rex1 20 , 21 . As an endogenous regulator of mGluR5, NCDN can negatively regulate the phosphorylation of Ca2+/calmodulin-dependent protein kinase II and affect neurite growth and synaptic plasticity 22 . In addition, anti-NCDN has been identified as a novel antibody related to autoimmune ataxia; therefore, NCDN may serve as a potential target antigen for autoimmune neurodegenerative diseases 23 , 24 . However, we did not find any studies on the effect of NCDN on GBM, and the clinical application value of NCDN remains to be clarified. In our study, RNA sequencing data of GBM samples were acquired from TCGA to select key genes associated with glioblastoma prognosis. Three genes ( NCDN , PAK1 , and SPRYD3 ) associated with the prognosis of GBM were selected by GO analysis, functional enrichment, WGCNA and survival analysis. In vitro cell models were subsequently established to investigate the effects of these selected key genes on cellular functions in glioblastoma cell lines.
Materials and methods Data sources and processing Clinical information and RNA sequencing data of patients were downloaded from the 'GBM' cohort of TCGA ( https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga ) ( Table 1 ). After excluding the samples that had previously undergone chemotherapy and radiotherapy, six normal samples and 794 GBM samples were selected for follow-up analysis. Acquisition of GBM differentially expressed genes (DEGs) Limma differential analysis was performed using the R package limma (version 3.40.6) to select DEGs between the case and control groups (an illustrative code sketch of this step is provided after this Methods section). Specifically, the lmFit function was used for multiple linear regression, and the eBayes function was then used to compute moderated t-statistics, moderated F-statistics, and log-odds of differential expression. We considered P < 0.05 to be statistically significant. Functional enrichment of DEGs Enrichment analysis was performed with the R package clusterProfiler (version 3.14.3). We considered P < 0.05 to be statistically significant. WGCNA and acquisition of GBM hub genes Genes with a standard deviation of 0 across samples were removed from the expression profiles, and the goodSamplesGenes method of the R package WGCNA was used to remove outlier genes and samples. WGCNA was further used to build scale-free co-expression networks. We calculated the module membership (MM) and gene significance (GS) of each gene, and identified hub genes based on the cut-off criteria (|GS| > 0.6 and |MM| > 0.8). Survival analysis of hub genes We analyzed survival with the R package survival and evaluated the prognostic significance of each gene by Cox regression analysis. Cell lines Human GBM cell lines (U-251 MG, U-87 MG) and human astrocytes (NHA) were obtained from Beijing BNA Chuanglian Biotechnology Institute. Short tandem repeat analysis confirmed correct cell typing. Quantitative reverse transcription PCR (RT-qPCR) analysis Cells were lysed with TRIzol Reagent from Solarbio (Ribobio, Guangzhou, China), total RNA was extracted, and the RNA concentrations were determined with a micro nucleic acid detector (Zheke Instrument Equipment Co., Ltd, Zhejiang, China). cDNA was obtained using a reverse transcription kit (Vazyme, Nanjing, China) and amplified via fluorescence quantitative PCR (Bio-Rad, Zhejiang, China). Relative expression levels were calculated with the 2 -ΔΔCt method using ACTB as the reference gene. The primer sequences of NCDN used for PCR are as follows (from 5' to 3'): Forward: CTGCCTGACAGGGTGGAGATTG Reverse: TGGGACTGTGATAGAGAGGATGG Cell transfection Cells were transfected with the NCDN shRNA plasmid (Guangzhou, China) using Lipofectamine™ 3000 Reagent (ThermoFisher, Shanghai, China). The cells were seeded in 6-well plates and cultured to 30% confluence at 37 °C. Following replacement with Opti-MEM medium, a mixture of plasmid and Lipofectamine™ 3000 was added. The cells were cultivated for a further 48 hours before the follow-up experiments were conducted. Target sequence of the NCDN shRNA plasmid: CAAAGCAGGTGACATAGAT. Transwell assay After transfection in six-well plates, cell densities were adjusted to 5 × 10 4 /100 μL. Complete medium (600 μL) containing 20% fetal bovine serum was added to a 24-well plate and a transwell chamber was placed on top. 200 μL of the cell suspension was added to the transwell chamber and incubated at 37 °C for 48 hours. Cells were subsequently fixed with 4% paraformaldehyde and stained with 0.1% crystal violet.
Images were obtained under a microscope and subjected to image-based quantification and statistical analyses. Cell cycle analysis 2 × 10 6 cells were collected and 5 mL of 70% ethanol was added. The cells were then fixed at 4 °C for 4 hours and washed twice with PBS. The fixed cells were incubated with 500 μL of PI/RNase staining buffer at 37 °C for 30 min. Stained cells were then analyzed by flow cytometry. Apoptosis assay After being counted, the cells to be tested were washed with PBS and centrifuged at 1000 rpm for 5 min. The supernatant was discarded, and the cells were resuspended in 1 × binding buffer. 100 μL of the cell suspension was transferred into a flow tube, and 5 μL of Annexin V staining solution and 10 μL of PI staining solution (Beyotime Biotechnology, Shanghai, China) were added. Fluorescence detection and data analysis were performed on a flow cytometer. Statistical analysis Data analysis and plotting were performed using GraphPad Prism 8. Chi-square and t tests were employed to compare differences between the experimental and control groups. We considered P < 0.05 to be statistically significant.
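As a rough illustration of the limma differential-expression step described in the Methods above, the sketch below assumes a normalized, log-scale expression matrix expr (genes × samples) and a two-level grouping factor group ("normal" vs "tumor"); these object names are hypothetical placeholders rather than the authors' actual analysis script.

```r
# Minimal limma sketch: gene-wise linear models, empirical Bayes moderation,
# and selection of DEGs at P < 0.05 (matching the threshold stated above).
library(limma)

design <- model.matrix(~ group)          # intercept + tumor-vs-normal term
fit    <- eBayes(lmFit(expr, design))    # lmFit for regression, eBayes for
                                         # moderated t/F statistics and log-odds
tab  <- topTable(fit, coef = 2, number = Inf)
degs <- tab[tab$P.Value < 0.05, ]        # differentially expressed genes
up   <- degs[degs$logFC > 0, ]           # up-regulated in tumor
down <- degs[degs$logFC < 0, ]           # down-regulated in tumor
```

Whether raw or adjusted P values (and an additional fold-change cut-off) are applied at this step is a study-specific choice; the sketch simply mirrors the P < 0.05 criterion stated in the Methods.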
Results Data pre-processing RNA sequencing data of GBM samples from TCGA for a total of 1132 cases were initially acquired, and after excluding the patients who had previously received chemotherapy and radiotherapy, 6 normal samples and 794 tumor samples were finally included in this study. Tumors were successfully distinguished from normal samples by principal component analysis, with the first two components accounting for 16.5% and 7.6% of the observed variance ( Figure 1 A, B ). Acquisition of DEGs and GO enrichment analysis 1527 up-regulated and 1986 down-regulated genes were identified ( Figure 1 C, D ). The GO enrichment analysis showed that the up-regulated DEGs are mainly involved in neurological development and ion transmembrane transport ( Figure 1 E ). In contrast, the down-regulated DEGs are mainly involved in immune response and cell activation ( Figure 1 F ). These results are consistent with current findings on GBM dysfunction, suggesting that they are credible for further analyses. WGCNA WGCNA was performed based on the expression matrix of the 3513 DEGs and the clinical data of 106 GBM samples. Firstly, the 106 samples were clustered; all of them fell within the critical threshold (height < 200) and no outliers were removed ( Figure 2 A ). WGCNA applied 5 clinical variables ( Figure 2 A ): tumor-normal, status, age, sex, type. According to the gene expression patterns, the obtained differential genes were grouped into different modules. The process yielded 13 co-expression modules: blue, black, brown, turquoise, pink, cyan, darkgreen, grey, lightgreen, royalblue, darkred, darkgrey, grey ( Figure 2 B, C ). The characteristic genes of the blue module were negatively correlated with GBM (cor = -0.76, P = 1.2×10 -20 ), while the characteristic genes of the turquoise module were positively correlated with GBM (cor = 0.74, P = 0.74×10 -20 ) ( Figure 2 D ). These correlations were further confirmed by the heat map ( Figure 2 E ). Hence, the blue and turquoise modules were analyzed to reveal hub genes. Acquisition of candidate hub genes from the blue and turquoise modules The results show a significant positive correlation between the MM and GS scores in the turquoise and blue modules ( Figure 2 F, G ). In the turquoise and blue modules, 588 and 1941 genes, respectively, were identified as meeting the thresholds of |MM| > 0.8 and |GS| > 0.6. Survival analysis of hub genes Based on the clinical information and expression data of the 106 GBM tumor samples, we examined the potential association between hub gene expression and patient survival (a brief code sketch of this type of analysis is given after the Results). The analysis showed that genes in the turquoise module, including NCDN , PAK1 and SPRYD3 , are associated with the prognosis of GBM. Therefore, these genes were defined as the final hub genes ( Figure 3 ). The NCDN gene is upregulated in glioblastoma cells The occurrence and progression of glioblastoma are attributed to the combined action of multiple genes. To clarify whether the hub genes are expressed differently in glioblastoma, U87 and U251 cells as well as normal brain glial NHA cells were selected for experiments. Gene expression was detected by RT-qPCR. The expression of NCDN in U87 and U251 cells was significantly upregulated compared to NHA cells ( Figure 4 ), suggesting that NCDN may have a promoting effect on glioblastoma. NCDN expression affects the migration ability of glioblastoma To determine the role of NCDN in glioblastoma, the transwell test was performed to detect the migration capacity of U87 cells with NCDN knocked down (by shRNA).
In the transwell assay, migration was lower in the NCDN knockdown group, clearly indicating a decreased cell migration capacity after NCDN knockdown. Our results show the promoting effect that NCDN has on the migratory function of glioblastoma cells ( Figure 5 A, B ). NCDN is involved in the cell cycle of U251 and U87 Flow cytometry was used to determine changes in the cell cycle in the U87 and U251 cell lines. The results showed that the proportion of cells in the G0/G1 stage in the NCDN knockdown group was significantly higher, indicating that NCDN knockdown inhibited cell cycle progression ( Figure 5 C ). NCDN has an impact on apoptosis in GBM cell lines The effect of NCDN on GBM cell apoptosis was studied by flow cytometry analysis. The results showed that in both GBM cell lines, knockdown of NCDN induced apoptosis ( Figure 5 D ).
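To illustrate the hub-gene survival analysis referred to above, the sketch below shows a Kaplan-Meier comparison and a Cox model for a single gene; the data frame clin with columns time, status and ncdn_expr is a hypothetical stand-in for the 106 TCGA GBM samples, not the actual cohort object.

```r
# Illustrative survival analysis for one hub gene (here NCDN expression).
library(survival)

clin$group <- ifelse(clin$ncdn_expr > median(clin$ncdn_expr), "high", "low")

km <- survfit(Surv(time, status) ~ group, data = clin)    # Kaplan-Meier curves
survdiff(Surv(time, status) ~ group, data = clin)         # log-rank test

cox <- coxph(Surv(time, status) ~ ncdn_expr, data = clin) # Cox regression
summary(cox)                                              # hazard ratio and P value
plot(km, col = c(2, 4), xlab = "Time", ylab = "Overall survival")
```

The same pattern would apply to PAK1 and SPRYD3; dichotomising at the median is only one of several reasonable ways to define high and low expression groups.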
Discussion Glioblastoma is an aggressive and lethal malignant brain tumor 25 , 26 . GBM patients who do not receive timely and effective treatment will progress to a higher-grade glioma, leading to a worse prognosis 27 . In this study, we identified biomarkers through bioinformatics analysis of an independent patient cohort and verified them through in vitro experiments. The results show that poor GBM prognosis is associated with relatively high expression of NCDN . The transwell test showed that NCDN promoted glioma cell migration. In addition, NCDN knockdown promoted apoptosis and blocked the cell cycle. Overall, our results illustrate that NCDN may be an ideal therapeutic target for inhibiting the progression of GBM. Although the diagnostic technology, surgical intervention, and medical methods for GBM have improved in general, the long-term survival rate of GBM patients is still rather low, with frequent recurrence and progression 28 , 29 . Finding new targets may become a key challenge for novel GBM therapy. Many studies have found various genes that play key roles in GBM and affect a variety of cellular functions in glioblastoma 30 - 32 . For instance, the downregulation of circNDC80 leads to a decrease in GBM cell proliferation, migration, and invasion, which makes circNDC80 a novel therapeutic target and prognostic biomarker for glioblastoma 33 . TRIM56 is elevated in human glioma and its product stabilizes cIAP1 protein via deubiquitination, thereby inhibiting apoptosis and promoting GBM cell proliferation 34 . Upregulation of HOTAIRM1 expression in GBM cells promotes cell migration and invasion, suggesting that targeting HOTAIRM1 is also a possible therapeutic strategy for GBM 35 . Elevated transmembrane protein TMEM230 in GBM can promote tumor cell migration, extracellular matrix remodeling, and excessive and abnormal formation of blood vessels, so TMEM230 has the potential to be a therapeutic target for inhibition of GBM tumor cells and anti-angiogenesis 36 . PDRG1 is abnormally highly expressed in GBM, promoting the migration and proliferation of GBM cells through the MEK/ERK/CD44 pathway 37 . Most current research on NCDN has focused on epilepsy, schizophrenia and depressive behavior, and there is limited research on its role in oncology. One study found that the NCDN-PDGFRA fusion gene was present in the DNA of GBM patients, and its fusion protein could be inhibited by tyrosine kinase inhibitors 38 . However, no study has yet discussed the effect of NCDN on the malignant biological behavior of GBM. Since NCDN has little sequence homology with other eukaryotic proteins, little is known about its function. In this study, we evaluated the effect of NCDN on GBM through bioinformatics analysis and cell biological function tests. We found that NCDN knockdown inhibits cell migration, promotes apoptosis, and induces cell cycle arrest. Our research provides a new theoretical basis for understanding the pathogenesis and progression of GBM. The number of normal samples in the TCGA data used in this study is very small, so we will conduct additional research with a balanced sample size in the future. In addition, we did not obtain the expected results in the transwell experiment with U251 cells, and we will include more GBM cell lines in further research to clarify the influence of NCDN on different GBM cell lines. Finally, we need to perform more cell and animal studies in the future.
Conclusion Our study shows that NCDN is upregulated in GBM and correlated with patient survival. Further knockdown experiments in the human U251 and U87 cell lines revealed impaired cell migration, increased apoptosis, and cell cycle arrest. In summary, NCDN may serve as a potential therapeutic target and biomarker for GBM treatment, and we believe it has the potential to become an effective target for GBM therapy. However, it is not yet clear how NCDN exerts these effects. Therefore, it is necessary to continue the bioinformatics analysis and conduct more in-depth experimental research to further elucidate this.
* These authors contributed equally to this work. Competing Interests: The authors have declared that no competing interest exists. Background: Glioblastoma (GBM) is a type of central nervous system malignancy. In our study, we determined the effect of NCDN in GBM patients through The Cancer Genome Atlas (TCGA) data analysis, and studied the effects of NCDN on GBM cell function to estimate its potential as a therapeutic target. Methods: Gene expression profiles of a glioblastoma cohort were acquired from the TCGA database and analyzed to identify central genes that may serve as GBM therapeutic targets. The cellular function of NCDN in glioblastoma cells was then explored through in vitro experiments. Results: Through gene ontology (GO) analysis, weighted gene co-expression network analysis (WGCNA), and survival analysis, we identified three key genes ( NCDN , PAK1 and SPRYD3 ) associated with poor prognosis in glioblastoma. In vitro experiments showed impaired cell migration, increased apoptosis, and cell cycle arrest in NCDN knockdown cells. Conclusion: NCDN affects the progression and prognosis of glioblastoma by promoting cell migration and inhibiting apoptosis.
Funding This study was supported by the Zhejiang Provincial Science and Technology Project [No: 2021Y0184] and the Wenzhou Science and Technology Project [No: Y20210279]. Author contributions XK H and CW X wrote the main manuscript text. JX W, XK H and HP D prepared figures 1 - 3 . TT H, SA C and LX Q prepared figures 4 - 5 . JC R and JC Y prepared table 1 . All authors reviewed the manuscript. Availability of data and materials All data and materials for this study are available upon request from the corresponding author. Abbreviations GBM: glioblastoma; TCGA: The Cancer Genome Atlas; GO: Gene Ontology; WGCNA: weighted gene co-expression network analysis; MM: module membership; GS: gene significance; DEGs: differentially expressed genes
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):1067-1076
oa_package/a3/30/PMC10788732.tar.gz
PMC10788733
0
1. Introduction Gastric cancer (GC) is a relatively common digestive tract malignancy with the fourth highest incidence and mortality rate worldwide 1 . GC lacks typical symptoms and signs at an early stage, so most patients are diagnosed at an advanced stage with lymph node involvement and distant invasion 2 , 3 . The detailed molecular biological mechanisms of GC initiation and development are poorly understood. Although immunotherapy is one of the emerging breakthroughs in cancer therapy, there are still no available and effective immunotherapy targets for GC 4 . Hence, the prognosis of GC is generally poor, with a low five-year survival rate of approximately 20% 5 . It is urgent to explore satisfactory diagnostic and prognostic tools to guide gastric cancer therapy and improve clinical outcomes. Disulfidptosis is a novel kind of cell death mediated by abnormal accumulation of intracellular disulphides; glucose transporter inhibitors have been shown to induce disulfidptosis in glioma cells and suppress tumour growth, suggesting a potential clinical application and treatment strategy 6 . In addition, it has been shown that SLC7A11, SLC3A2, RPN1, and NCKAP1 are the key genes required for the progression of disulfidptosis 6 . To date, the clinical significance and value of disulfidptosis-related genes are unknown, so it is necessary to assess the roles of these genes in gastric cancer. Therefore, we developed and validated a prognostic nomogram and a combined diagnostic model based on disulfidptosis-related genes in this study. We further investigated the competing endogenous RNA (ceRNA) regulatory mechanisms, biological functions, immune microenvironment and immunotherapy-related drugs. We aimed to illustrate the clinical value of disulfidptosis-related genes for improving the long-term outcomes of GC patients and to characterize the immune landscape to provide novel immunotherapy targets for clinical application.
2. Materials and methods 2.1 Public database retrieval and clinical data acquisition We downloaded the complete clinical and pathological information and RNA sequencing (RNA-seq) data for gastric cancer from The Cancer Genome Atlas (TCGA) database ( https://genome-cancer.ucsc.edu/ ) and normalized RNA-seq data from the Genotype-Tissue Expression (GTEx) data portal ( https://www.gtexportal.org/home/index.html ). Two human GC cell lines (SGC-7903 and MGC-803) and the immortalized human stomach cell line GES-1 were obtained from the Shanghai Institute of Biochemistry and Cell Biology, Chinese Academy of Sciences, China. Clinical samples, namely GC tissues and paired adjacent nontumorous tissues (5 cm away from the edge of the tumour), were collected from 30 patients who underwent gastrectomy at the Affiliated Hospital of Medical School of Ningbo University, China, between 2022 and 2023. All patients signed informed consent forms, and this study was approved by the Ethics Committee of the Affiliated Hospital of Medical School of Ningbo University (No. KY20220101). 2.2 Differentially expressed and prognostic disulfidptosis-related gene identification We first compared the expression levels of disulfidptosis-related genes between gastric cancer tissues in the TCGA cohort and normal tissues in the GTEx cohort using t tests or Wilcoxon rank-sum tests. The associations between the expression levels of disulfidptosis-related genes and mismatch repair (MMR) genes were examined in the TCGA cohort 7 . The cBio Cancer Genomics Portal ( http://cbioportal.org ) was used to explore multidimensional alterations in disulfidptosis-related genes in TCGA GC samples 8 . The Kaplan-Meier method was used to analyse the GC survival data from this cohort via R software (version 4.2.1) and the R package survival v 3.3.1. Univariate and multivariate regression analyses were utilized to identify significant clinical prognostic factors. The results of the multivariable model were shown as forest plots via the forest plot function in R software. The risk score model was constructed as the weighted sum of the prognostic risk factors with the following formula: risk score = expression level of Gene 1 × β 1 + expression level of Gene 2 × β 2 + ... + expression level of Gene n × β n 9 . The risk score was then computed for all patients in the TCGA cohort to evaluate the prognostic performance of the risk score model. 2.3 Construction and validation of the disulfidptosis-related prognostic nomogram model The risk factors identified in the multivariate regression and the risk score model were incorporated into the prognostic nomogram model. The nomogram model predicting 1-, 3- and 5-year overall survival (OS) was established with the R packages survival [3.3.1] and rms [6.3-0]. A calibration curve was obtained; a curve lying on the 45-degree diagonal line suggests an ideal nomogram 10 . Decision curve analysis (DCA) was also performed to assess the clinical net benefit 11 . 2.4 Assessment of the diagnostic values of disulfidptosis-related genes The TCGA cohort and receiver operating characteristic (ROC) curve analysis were used to evaluate the diagnostic values of disulfidptosis-related genes. Then, the GTEx cohort was added to validate the diagnostic effectiveness. A combined diagnostic model was built to improve the diagnostic value. 2.5 Biological function analysis of prognostic disulfidptosis-related genes The GeneMANIA prediction server is a web portal for gathering interacting genes and drawing integrated biological networks for gene prioritization 12 .
The interacting genes were chosen to build a protein‒protein interaction (PPI) network via STRING 11.5, an online database for searching and constructing organism-wide protein association networks 13 . KEGG pathway enrichment analysis and gene ontology (GO) classification were performed to explore the biological functions of the PPI network via the R packages "clusterProfiler" and "ggplot2" 14 - 17 . A p value < 0.05 represents a statistically significant difference. 2.6 Immune infiltration landscape analysis The correlations between the expression levels of the prognostic disulfidptosis-related genes and immune cell infiltration were analysed with the R packages "GSVA (1.46.0)" and "estimate (1.0.13)" using the default parameters 18 . The Deeply Integrated Single-Cell Omics database (DISCO, https://www.immunesinglecell.org/ ), which contains comprehensive collections of single-cell RNA-seq datasets of the tumour microenvironment, was used to assess the purity and immune infiltration of GC 19 . TISIDB ( http://cis.hku.hk/TISIDB/index.php ), an online web portal integrating multiple heterogeneous data types, was applied to explore immune system interactions and related drugs 20 . Pearson's correlation analysis was performed to determine the associations between gene expression levels and these indicators ( P <0.05). 2.7 Construction of the competing endogenous RNA regulatory network MicroRNAs (miRNAs) targeting the disulfidptosis-related genes were retrieved from four prediction databases: DIANA-microT 2023 ( http://diana.imis.athena-innovation.gr/DianaTools/index.php?r=microT_CDS/index ) 21 , TarBase v.8 ( https://dianalab.e-ce.uth.gr/html/diana/web/index.php?r=tarbasev8 ) 22 , miRDB ( http://mirdb.org/miRDB/ ) 23 , and miRWalk ( http://mirwalk.umm.uni-heidelberg.de/ ). Target miRNAs were defined as miRNAs found in all four databases 24 . Thereafter, the target miRNAs were input into the mirDIP database ( http://ophid.utoronto.ca/mirDIP/index_confirm.jsp ) and the "Bidirectional" mode was used to filter very high confidence RNAs 25 . All twenty data sources were chosen, and genes supported by three or more of the programs and belonging to the top 1% of the confidence class (Very High) were considered possible target genes. LncBase V.3 ( https://diana.e-ce.uth.gr/lncbasev3 ) was used to find the long non-coding RNAs (lncRNAs) interacting with the target miRNAs 26 . LncRNAs with direct validation supported by at least 3 experiments were deemed target lncRNAs. The potential correlations between RNA binding proteins and the mRNAs were acquired from starBase ( https://rnasysu.com/encori/index.php ) 27 . 2.8 RNA isolation and quantitative real-time PCR (qRT-PCR) All RNA was extracted from tissue and plasma using TRIzol reagent and TRIzol LS reagent (Ambion, Carlsbad, CA, USA). Total RNA was used as a template and reverse transcribed to cDNA using the GoScript Reverse Transcription (RT) System (Promega, Madison, WI, USA) according to the manufacturer's instructions 28 . qRT-PCR was performed with GoTaq qPCR Master Mix (Promega) according to the manufacturer's instructions on an Mx3005P Real-Time PCR System (Stratagene, La Jolla, CA, USA), with each reaction run in duplicate. The reaction conditions were as follows: denaturation at 95 °C for 15 s, annealing at 50 °C for 30 s, and extension at 72 °C for 30 s, for 40 cycles, followed by a final extension at 72 °C for 7 min.
All of the primers were synthesized by Sangon Biotech (Shanghai, China), and the primer sequences were as follows: NCKAP1: forward, 5'-TCCTAAATACTGACGCTACAGCA-3', reverse, 5'-GCCTCCTTGCATTCTTATGTC-3'. SLC7A11: forward, 5'-TTACCAGCTTTTTTACGAGTCT-3', reverse, 5'-GTGAGCTTGCAAAAGGTTAAGA-3'. GAPDH: forward, 5'-ACCCACTCCTCCACCTTTGAC-3', reverse, 5'-TGTTGCTGTAGCCAAATTCGTT-3'. The fold change of the target genes was standardized via the Δ C t method (Δ C t = C t gene - C t GAPDH ), in which a higher Δ C t indicates a lower expression level 29 . The ΔΔ C t method (ΔΔ C t = Δ C t GC cell - Δ C t GES-1 ) was used to compare expression levels in GC cell lines and calculate relative expression, where a higher 2 -ΔΔ C t value represents a higher relative expression level 30 . 2.9 Statistical analysis Analyses in this study were performed using R software (version 4.2.1), Cytoscape (version 3.8.0) or GraphPad Prism (version 8.02), together with their supporting packages as mentioned above. P <0.05 was considered significant.
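The ΔCt/ΔΔCt calculation described above reduces to a few lines of arithmetic. The sketch below uses a hypothetical data frame ct of Ct values; the numbers are purely illustrative and are not measured data.

```r
# 2^-ΔΔCt relative expression, normalized to GAPDH and referenced to GES-1.
ct <- data.frame(
  cell   = c("GES-1", "GC_line_1", "GC_line_2"),
  target = c(28.1, 25.4, 26.0),    # illustrative Ct values for the target gene
  gapdh  = c(18.0, 18.2, 17.9)     # illustrative Ct values for GAPDH
)

ct$dCt  <- ct$target - ct$gapdh                  # ΔCt  = Ct(gene) - Ct(GAPDH)
ct$ddCt <- ct$dCt - ct$dCt[ct$cell == "GES-1"]   # ΔΔCt = ΔCt(GC cell) - ΔCt(GES-1)
ct$fold <- 2^(-ct$ddCt)                          # relative expression (fold change)
ct
```

Note that a higher ΔCt corresponds to lower expression, and the 2^-ΔΔCt value for GES-1 itself is 1 by construction.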
3. Results 3.1 Disulfidptosis-related genes were differentially expressed in GC The RNA-seq data of 414 GC tissues and 36 para-carcinoma tissues were extracted from the TCGA database, and 174 normal tissues were downloaded from the GTEx database. Our results showed that NCKAP1, RPN1, SLC3A2, and SLC7A11 were consistently overexpressed in GC tissues ( P <0.001) (Fig. 1 A). Co-expression analysis showed that these genes were significantly correlated with one another, suggesting internal links among these genes (Fig. 1 B). Similarly, the associations between the expression of the MMR genes and the disulfidptosis-related genes are shown in Fig. 1 B-D. The mutation data of these genes in GC and GC subtypes are shown in Supplementary Figure 1 ( Fig. S1 ). 3.2 Identification of prognostic disulfidptosis-related genes The expression levels of NCKAP1, SLC3A2, and SLC7A11 were remarkably associated with OS in GC by the Kaplan-Meier method, as shown in Fig. 2 A-D. Combined with common clinicopathologic characteristics ( Table S1 -2), univariate and multivariate Cox regression showed that NCKAP1, SLC7A11, age, sex, pathological T stage, pathological N stage, and pathological M stage were independent risk factors for OS (Table 1 ), which was visualized by a forest plot (Fig. 2 E). Then, the prognostic risk score model was constructed according to the multivariate Cox regression: risk score = (1.455 × NCKAP1 expression) + (0.776 × SLC7A11 expression). The Kaplan-Meier curves showed that patients with higher risk factors (including NCKAP1 and SLC7A11) had poorer outcomes ( Fig. S2 ). 3.3 Construction and validation of the prognostic nomogram model Given the favourable prognostic value of these parameters, we integrated these characteristics and established a prognostic nomogram model to predict the 1-, 3-, and 5-year OS of GC patients, as displayed in Fig. 3 A. The C-index of the nomogram model was 0.681 (0.656-0.707). Subsequently, the nomogram calibration plot (Fig. 3 B) demonstrated that the model was accurately calibrated to the observed probabilities. The DCA curves in Fig. 3 C-E indicated that our nomogram model had satisfactory clinical usefulness. 3.4 NCKAP1 and SLC7A11 are promising screening biomarkers of GC Furthermore, the diagnostic values of both prognostic genes were assessed. We first built the ROC curves of NCKAP1 and SLC7A11 from the TCGA cohort in Fig. 4 A. Based on the superior AUC values, we expanded the samples by adding the GTEx cohort for validation. The AUC values in Fig. 4 B are 0.664 (NCKAP1) and 0.698 (SLC7A11). Finally, combined diagnosis was performed to improve the diagnostic efficacy, as shown in Fig. 4 C (AUC = 0.676, 95% CI = 0.631-0.720). 3.5 Biological function analysis of NCKAP1 and SLC7A11 The functions of NCKAP1 and SLC7A11 were further explored. Twenty genes significantly associated with NCKAP1 and SLC7A11 were identified by GeneMANIA, and the network output is shown in Fig. 5 A. All of the nodes were analysed using STRING, and the PPI network is shown in Fig. 5 B to illustrate the protein-protein interaction relationships. Moreover, biological process, molecular function, cellular component, and KEGG pathway analyses were performed and visualized in Fig. S3 and Table S3 . The biological and cellular functions were focused on actin activities, GTPase activity and immune reactions. 3.6 Comprehensive evaluation of the immune landscape in GC Based on the immune-related results of the functional analysis, we further described the immune landscape of NCKAP1 and SLC7A11.
Our results showed that the expression of NCKAP1 was significantly associated with the infiltration of immune cells, such as central memory T (Tcm) cells, T helper cells, and plasmacytoid DCs, as shown in Fig. 6 A. Furthermore, the expression level of SLC7A11 correlated with immune cell infiltration, including T helper cells, Th2 cells, and neutrophils (Fig. 6 B). Meanwhile, both NCKAP1 and SLC7A11 were related to the ESTIMATE score and immune score (Fig. 6 C-D). Moreover, we systematically evaluated the relationships between both molecules and the microenvironment, chemokines, chemokine receptors, immunoinhibitors, immunostimulators, MHC molecules, and target drugs, as shown in Fig. S4 -5. 3.7 Establishment of the ceRNA regulatory network Aiming to reveal the downstream regulatory mechanisms of NCKAP1 and SLC7A11, their interactions with miRNAs and lncRNAs were investigated. Our results showed that NCKAP1 bound to 5 miRNAs and SLC7A11 bound to 3 miRNAs. More interestingly, two lncRNAs (TUG1 and SNHG6) as well as 8 mRNAs were co-regulated with NCKAP1 and SLC7A11 via shared miRNAs (Fig. 7 ), which provides important targets for further research. The expression of the co-regulated targets was examined in the TCGA cohort, as shown in Fig. S6 . In addition, the relationships between RNA binding proteins and NCKAP1 and SLC7A11 are shown in Table S4 . Finally, biological process, molecular function, cellular component, and KEGG pathway analyses displayed the potential functions of the ceRNA network in Fig. S7 . 3.8 Validation of the differential expression and clinical significance of NCKAP1 and SLC7A11 To validate the differential expression of both genes, qRT-PCR was performed to detect the expression levels of NCKAP1 and SLC7A11 in cell lines and tissues. Our results showed that NCKAP1 and SLC7A11 were both upregulated in GC cells (Fig. 8 A-B). The results from paired GC tissues showed that NCKAP1 and SLC7A11 were also overexpressed in GC tissues (Fig. 8 C-D). The AUC, cut-off value, sensitivity, and specificity of the ROC curve of NCKAP1 were 0.648, 3.16, 66.7%, and 60%, respectively (Fig. 8 E). The AUC, cut-off value, sensitivity, and specificity of the ROC curve of SLC7A11 were 0.699, 6.275, 50.0%, and 93.3%, respectively (Fig. 8 F). All of these results suggested that elevated NCKAP1 and SLC7A11 are promising prognostic and diagnostic biomarkers in GC.
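As a rough illustration of the single-gene and combined ROC analyses reported above, the sketch below uses the pROC package on a hypothetical data frame dat with a tumour/normal label and per-sample expression values for NCKAP1 and SLC7A11; the object and column names are placeholders, not the study data.

```r
# Single-gene ROC, combined diagnostic model, and Youden-optimal cut-off.
library(pROC)

roc_nckap1 <- roc(dat$group, dat$NCKAP1, levels = c("normal", "tumour"))
auc(roc_nckap1)                        # area under the curve
ci.auc(roc_nckap1)                     # 95% confidence interval

# Combined diagnosis: logistic model on both genes, ROC of its fitted risk
fit      <- glm(I(group == "tumour") ~ NCKAP1 + SLC7A11,
                family = binomial, data = dat)
roc_comb <- roc(dat$group, fitted(fit), levels = c("normal", "tumour"))
auc(roc_comb)

# Cut-off value with the corresponding sensitivity and specificity (Youden index)
coords(roc_nckap1, "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity"))
```

This is only one reasonable way to report a cut-off; other criteria (for example, maximising specificity) would give different operating points from the same ROC curve.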
4. Discussion Currently, the diagnostic efficacy and prognosis of GC are still not ideal despite developments and breakthroughs in surgery, radiotherapy combined with chemotherapy, and immunological regulators 31 , 32 . Moreover, the aetiology and pathogenesis of GC are multifactorial and poorly understood. Hence, it is important to identify novel tumour biomarkers and elucidate the molecular mechanisms of tumour initiation and progression. Abnormal programmed cell death and apoptosis are critical pathways for tumour growth and development. Elucidating uncontrolled proliferation and apoptosis can facilitate recovery of the balance of the cell cycle, which is helpful for designing promising tumour biomarkers and immunotherapy targets. Disulfidptosis is a unique and novel type of cell death that is different from traditional apoptosis and necrosis; it triggers cell death by promoting actin polymerization and lamellipodia formation upon aberrant accumulation of intracellular disulphides 33 . NCKAP1, RPN1, SLC3A2, and SLC7A11 are the key genes in the progression of disulfidptosis, and their potential to lead to actin network collapse and cell death in GC is unknown. It has been demonstrated that loss of NCKAP1 can affect major actin nucleators in lamellipodia formation in fibroblasts by influencing spreading and focal adhesion dynamics, indicating a role for NCKAP1 in cell migration 34 . Moreover, NCKAP1 significantly inhibited cell proliferation, invasion and migration in clear cell renal cell carcinoma and is a prognostic biomarker for clinical application 35 . The expression level of SLC7A11 is regulated by stress, such as oxidative stress and genotoxic stress, which further induces cell death and apoptosis 36 . However, the clinical value and cellular functions of these genes in GC remain unclear. In this study, we found that NCKAP1 and SLC7A11 were independent risk factors for GC survival time and established a prognostic nomogram. The validity of the model was confirmed by a calibration plot and DCA curves. Moreover, we further explored and validated their value for diagnostic application. The analysis of TCGA samples and clinical samples displayed satisfactory results for the AUC, sensitivity and specificity of NCKAP1 and SLC7A11 overexpression for GC screening. All of these results implied that NCKAP1 and SLC7A11 are potential prognostic and diagnostic biomarkers for GC and are worthy of larger, multicentre randomized clinical trials. In addition, we revealed the integrated functions of NCKAP1, SLC7A11 and associated genes in a functional network and explored their possible functions via enrichment analysis. These genes and proteins may influence GC development by regulating actin activities, GTPase-related energy metabolism and immune reactions, which are closely related to the process of disulfidptosis, as mentioned before. Specific immune-related genes can reflect the GC immune microenvironment and predict the efficacy of immune checkpoint inhibitor therapy 37 . In fact, numerous studies have linked a high tumor mutation burden with immunotherapy responses, and immunotherapy strategies have made progress 38 - 41 . For instance, chemotherapy plus pembrolizumab and trastuzumab shows obvious benefits in improving overall survival time in GC patients and is approved as a first-line treatment for HER2-positive GC 42 .
Nivolumab is a monoclonal antibody inhibitor of PD-1 that has been shown to provide durable responses with manageable safety in patients with advanced GC who progressed following second-line treatment 43 . However, the exact contribution and durability of responses to immunotherapy are still uncertain, and it is necessary to assess the objective response rate as well as novel treatment targets 44 . In this work, we found a relationship between the MMR genes and the expression of NCKAP1 and SLC7A11, which is essential to anti-tumor immunity 45 . Meanwhile, the overexpression of NCKAP1 and SLC7A11 was simultaneously associated with the infiltration of T helper cells, NK CD56dim cells, activated DCs (aDCs), immature DCs (iDCs), T follicular helper cells (TFHs), B cells, and plasmacytoid DCs (pDCs). Subsequently, the expression of SLC7A11 was associated with several drugs, such as riluzole and sulfasalazine, that regulate downstream genes, which provides useful information and directions for future clinical research 46 . We also systematically delineated an overall immune landscape related to treatment, which provides promising immune treatment targets with the ultimate goal of improving clinical outcomes and survivorship. With the emerging appreciation of the significance of ncRNAs, current studies are paying increasing attention to the roles of ceRNAs in tumor initiation and progression 47 . miRNAs play important roles in cancer-related immune regulation, and their expression correlates with tumor mutation burden and immune regulation 48 . Meanwhile, lncRNAs can function as competing endogenous RNAs to impair miRNA-mediated inhibition of target mRNAs, further regulating gene expression, protein translation and malignant biological properties 49 . A recent study demonstrated that disulfidptosis-associated lncRNAs have the potential to predict the prognosis, tumor microenvironment, and immunotherapy and chemotherapy options in colon adenocarcinoma, which strongly implies the significance of the correlation between lncRNAs and disulfidptosis-associated genes 50 . Our ceRNA network showed the interactions of the co-regulated lncRNAs and mRNAs of NCKAP1 and SLC7A11, which points to directions for future research on the downstream regulatory signaling pathways. A limitation of our present study is the lack of functional verification, which will be addressed in future studies.
5. Conclusion In conclusion, NCKAP1 and SLC7A11 are promising prognostic and diagnostic biomarkers for GC, and they correlate with actin activities, GTPase-related energy metabolism, immune infiltration and immunotherapy.
* These authors contributed equally to this work. Competing Interests: The authors have declared that no competing interest exists. Background: Worldwide, gastric cancer (GC) remains intractable due to its poor prognosis and high morbidity and mortality. Disulfidptosis is a novel kind of cell death mediated by the abnormal accumulation of intracellular disulphides. The correlation between disulfidptosis and GC is still unknown. Therefore, it is necessary to elucidate the pathogenesis and mechanisms linking disulfidptosis and GC for clinical diagnosis and intervention. Methods: RNA-sequencing data from several public data portals and clinical samples were collected. We compared the expression levels of four key genes of disulfidptosis, including SLC7A11, SLC3A2, RPN1, and NCKAP1, in GC and selected prognostic genes to build a novel GC prognosis-related nomogram model. The biological functions and immune landscape of the identified prognostic genes were explored. Results: Overexpressed NCKAP1 and SLC7A11 were identified as prognostic disulfidptosis-related genes in GC. We combined these genes and several clinicopathological factors to build a prognostic nomogram model for GC. Meanwhile, the ROC curves showed that NCKAP1 and SLC7A11 were promising biomarkers for GC screening. The biological and cellular functions were focused on actin activities, GTPase activity and immune reactions. The tumour immune microenvironment and immune therapy targets were identified. A competing endogenous RNA network was built to explore the downstream regulatory mechanisms. Finally, the elevated NCKAP1 and SLC7A11 expression in GC was validated via qRT-PCR in cell lines and tissues. Conclusion: In conclusion, NCKAP1 and SLC7A11 are promising prognostic and diagnostic biomarkers for GC that correlate with actin activities, GTPase-related energy metabolism, immune infiltration and immunotherapy.
Supplementary Material
We thank all contributors of high-quality data to these accessible public databases. We thank the Home for Researchers editorial team (www.home-for-researchers.com) for their language editing service. Funding This study was supported by grants from the Key Scientific and Technological Projects of Ningbo (No. 2021Z133), the Natural Science Foundation of Ningbo (No. 202003N4198), the Affiliated Hospital of Medical School of Ningbo University Youth Talent Cultivation Program (No. FYQMKY202001), the Ningbo Health Technology Project (No. 2020Y13) and the Medical and Health Research Project of Zhejiang Province (No. 2021KY892, No. 2024KY1515). Ethical approval This study was performed in line with the principles of the Declaration of Helsinki. All participants gave written informed consent to take part in the present study. This study was approved by the Ethics Committee of The First Affiliated Hospital of Ningbo University (No. KY20220101). Author contributions Y.F.S. designed the study and critically reviewed the manuscript. J.N.Y. downloaded and analyzed the data. M.Q.S. performed the cell cultures. S.K.Z. and C.L.J. extracted RNA and performed the qRT-PCR studies. J.N.Y. and Z.Y.F. drew the diagrams and wrote the manuscript. Q.E.L. administered the project and provided the funding. C.T. helped to check the spelling and polished the manuscript. J.N.Y. and Z.Y.F. contributed equally to this work. The final manuscript has been approved by all authors. Availability of data and materials The datasets that support the findings of the current study are available in the TCGA [ https://tcga-data.nci.nih.gov/ ], GTEx [ https://www.gtexportal.org/home/index.html ] and GEO [ https://www.ncbi.nlm.nih.gov/gds ] databases. The datasets analyzed during the current study are available in the supplementary data. The data that support the findings of this study are available from the corresponding author upon reasonable request.
CC BY
no
2024-01-16 23:43:49
J Cancer. 2024 Jan 1; 15(4):1053-1066
oa_package/35/fe/PMC10788733.tar.gz
PMC10788734
0
Conclusions External validation studies should be highly valued by the research community. A model is never completely validated, 3 79 because its predictive performance could change across target settings, populations, and subgroups, and might deteriorate over time owing to improvements in care (leading to calibration drift). Thus, external validation studies should be viewed as a necessary and continual part of evaluating a model’s performance. In the next article in this series, we describe how to calculate the sample size required for such studies. 21
External validation studies are an important but often neglected part of prediction model research. In this article, the second in a series on model evaluation, Riley and colleagues explain what an external validation study entails and describe the key steps involved, from establishing a high quality dataset to evaluating a model’s predictive performance and clinical usefulness.
A clinical prediction model is used to calculate predictions for an individual conditional on their characteristics. Such predictions might be of a continuous value (eg, blood pressure, fat mass) or the probability of a particular event occurring (eg, disease recurrence), and are often in the context of a particular time point (eg, probability of disease recurrence within the next 12 months). Clinical prediction models are traditionally based on a regression equation but are increasingly derived using artificial intelligence or machine learning methods (eg, random forests, neural networks). Regardless of the modelling approach, part 1 in this series emphasises the importance of model evaluation, and the role of external validation studies to quantify a model’s predictive performance in one or more target population(s) for model deployment. 1 Here, in part 2, we describe how to undertake such an external validation study and guide researchers through the steps involved, with a particular focus on the statistical methods and measures required, complementing other existing work. 2 3 4 5 6 7 8 9 10 11 12 13 These steps form the minimum requirement for external validation of any clinical prediction models, including those based on artificial intelligence, machine learning or regression. What do we mean by external validation? External validation is the evaluation of a model’s predictive performance in a different (but relevant) dataset, which was not used in the development process. 1 5 7 14 15 16 17 18 It does not involve refitting the model to compare how the refitted model equation (or its performance) changes compared to the original model. Rather, it involves applying a model as originally specified and then quantifying the accuracy of the predictions made. Five key steps are involved: obtaining a suitable dataset, making outcome predictions, evaluating predictive performance, assessing clinical usefulness, and clearly reporting findings. In this article, we outline these steps, using real examples for illustration. Step 1: Obtaining a suitable dataset for external validation The first step of an external validation study is obtaining a suitable, high quality dataset. What quality issues should be considered in an external validation dataset? A high quality dataset is more easily attained when initiating a prospective study to collect data for external validation, but this approach is potentially time consuming and expensive. The use of existing datasets (eg, from electronic health records) is convenient and often cheaper but is of limited value if the quality is low (eg, predictors are missing, outcome or predictor measurement methods do not reflect actual practice, or time of event is not recorded). Also, some existing datasets have a narrower case mix than the wider target population owing to specific entry criteria; for instance, UK Biobank is a highly selective cohort, restricted to individuals aged between 40 and 69—therefore, its use for external validation would leave uncertainty about a model’s validity for the wider population (including those aged <40 or >69). To help judge whether an existing dataset is suitable for use in an external validation study, we recommend using the signalling questions within the Prediction model Risk Of Bias ASsessment Tool (PROBAST) domains for Participant Selection, Predictors and Outcome ( box 1 ). 
19 20 Fundamentally, the dataset should be fit for purpose, such that it represents the target population, setting, and implementation of the model in clinical practice. For instance, it should have patient inclusion and exclusion criteria that match those in the target population and setting for use (eg, in the UK, prediction models intended for use in primary care might consider databases such as QResearch, Clinical Practice Research Datalink, Secure Anonymised Information Linkage, and The Health Improvement Network); measure predictors at or before the start point intended for making predictions; ensure measurement methods (for predictors and outcomes) reflect those to be used in practice; and have suitable follow-up information to cover the time points of interest for outcome prediction. It should also have a suitable sample size to ensure precise estimates of predictive performance (see part 3 of our series), 21 and ideally the amount of missing data should be small (see section on dealing with missing data, below). What population and setting should be used for external validation of a prediction model? Researchers should focus on evaluating a model’s target validity, 1 3 such that the validation study represents the target population and setting in which the model is planned to be implemented (otherwise it will have little value). The validation study might include the same populations and settings that were used to develop the model. However, it could be a deliberate intention to evaluate a model’s performance in a different target population (eg, country) or setting (eg, secondary care) than that used in model development. For this reason, multiple external validation studies are conducted for the same model, to evaluate performance across different populations and settings. For example, the predictive performance of the Nottingham Prognostic Index has been evaluated in many external validation studies. 22 The more external validations that confirm good performance of a model in different populations and settings, the more likely it will be useful in untested populations and settings. Most external validation studies are based on data that are convenient (eg, already available from a previous study) or easy to collect locally. As such, they often only evaluate a model’s performance in a specific target setting or (sub)population. To help clarify the scope of the external validation, Debray et al 5 recommend that researchers should quantify the relatedness between the development and validation datasets, and to make it clear whether the focus of the external validation is on reproducibility or transportability. Reproducibility relates to when the external validation dataset is from a population and setting similar to that used for model development. Reproducibility is also examined when applying internal validation methods (eg, cross validation, bootstrapping) to the original development data during the model development, as discussed in our first paper. 1 Conversely, transportability relates to external validation in an intended different population or setting, for which model performance is often expected to change owing to possible differences in predictor effects and the participant case mix compared with the original development dataset (eg, when moving from a primary care to a secondary care setting). What information needs to be recorded in the external validation dataset? 
At a minimum, the external validation dataset must contain the information needed to apply the model (ie, to make predictions) and make comparisons to observed outcomes. This required information means that, for each participant, the dataset should contain the outcome of interest and the values of any predictors included in the model. For time-to-event outcomes, any censoring times (ie, end of follow-up) and the time of any outcome occurrence should also be recorded. Fundamentally, the outcome should be reliably measured, and the recorded predictor information must reflect how, and the moment when, the model will be deployed in practice. For example, for a model to be used before surgery to predict 28 day mortality after surgery, it should use predictors that are available before surgery, and not any perioperative or postoperative predictors. Step 2: Making predictions from the model Once the external validation dataset is finalised (ready for analysis), the next step is to apply the existing prediction model to derive predicted values for each participant in the external validation dataset. This step should not be done manually, but rather done by using appropriate (statistical) code that can be programmed to apply the model to each participant in the external validation dataset and compute predicted outcome values based on their multiple predictor values. For some models, typically those based on black box artificial intelligence or machine learning methods, by design they can only be made directly available (by the model developers) as a software object, or accessible via a specific system or server. Figure 1 illustrates the general format of using regression based prediction models to estimate outcome values or event probabilities (risks), and figure 2 and figure 3 provide two case studies. Figure 2 shows a prediction model developed using the US West subset of the GUSTO-I data (2188 individuals, 135 events), which estimates the probability of 30 day mortality after an acute myocardial infarction. 23 The logistic regression model includes eight predictors (with 10 predictor parameters, since an additional two parameters are required to capture the categorical Killip classification). For illustration of externally validating this model, we use the remaining data from the GUSTO-I dataset (with thanks to Duke Clinical Research Institute), 23 which contains all eight predictor variables and outcome information for 38 642 individuals. Figure 3 shows a prediction model for calculating the five year probability of a recurrence in patients with a diagnosis of primary breast cancer. This survival model was developed for illustrative purposes in 1546 (node positive) participants (974 events) from the Rotterdam breast cancer study, 18 24 including eight predictors with 10 predictor parameters. External validation is carried out using data from the German Breast Cancer Study Group, which contains all eight predictor variables and outcome information for 686 patients (with 299 events). 18 25 26 27 Once the predictions have been calculated for each participant, it is good practice to summarise their observed distribution, for example, as a histogram, with summary statistics such as the mean and standard deviation. This presentation is illustrated for the two examples in figure 2 and figure 3 , separately for those individuals with and without the outcome event. 
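To make step 2 concrete, the sketch below shows how an already specified logistic regression model would typically be applied to every participant in a validation dataset using R. The intercept, coefficients, predictor names, and example data are illustrative placeholders only; they are not the GUSTO-I model of figure 2 or its validation data.
# Minimal R sketch of step 2: applying an existing logistic prediction model to an
# external validation dataset. All numbers and variable names are illustrative placeholders.
val_data <- data.frame(                               # one row per participant
  age    = c(55, 72, 63, 48),
  sysbp  = c(120, 95, 140, 130),
  killip = factor(c("I", "II", "I", "III"), levels = c("I", "II", "III", "IV")),
  died30 = c(0, 1, 0, 0)                              # observed binary outcome
)
intercept <- -4.0                                     # model as originally specified (placeholder values)
b_age     <-  0.05
b_sysbp   <- -0.01
b_killip  <- c(I = 0, II = 0.6, III = 1.1, IV = 1.9)  # parameters for the categorical predictor
lp   <- intercept + b_age * val_data$age + b_sysbp * val_data$sysbp +
        b_killip[as.character(val_data$killip)]       # linear predictor for each participant
prob <- plogis(lp)                                    # estimated event probability (inverse logit)
tapply(prob, val_data$died30, summary)                # distribution of predictions by observed outcome
hist(prob, main = "Estimated event probabilities", xlab = "Predicted probability")
For a time-to-event model, the analogous step combines the linear predictor with the baseline survival at the time point of interest to obtain each participant's predicted event probability.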
Step 3: Quantifying a model’s predictive performance The third step is to quantify a model’s predictive performance in terms of overall fit, calibration, and discrimination. This step requires suitable statistical software, which is discussed in supplementary material S1, 28 29 30 31 32 and example code is provided at www.prognosisresearch.com/software . Overall fit Overall performance of a prediction model for a continuous outcome is quantified by R 2 , the proportion of the total variance of outcome values that is explained by the model, with values closer to 1 preferred. Often this value is multiplied by 100, to give the percentage of variation explained. Generalisations of R 2 for binary or time-to-event outcomes have also been proposed, such as the Cox-Snell R 2 (this has a maximum value below 1), 33 Nagelkerke’s R 2 (a scaled version of the Cox-Snell R 2 , which has a maximum value of 1), 34 O’Quigley’s R 2 , 35 Royston’s R 2 , 36 and Royston and Sauerbrei’s R 2 D . 37 We particularly recommend reporting the Cox-Snell R 2 value, as it is needed in sample size calculations for future model development studies. 38 Another overall measure of fit is the mean squared error of predictions, which for continuous outcomes can be obtained on external validation by calculating the mean of the squared difference between participants’ observed outcomes and their estimated (from the model) outcomes. An extension of the mean square error for binary or time-to-event outcomes is the Brier score, 39 40 which compares observed outcomes and estimated probabilities. Overall fit performance estimates are shown for the two examples in table 1 . Calibration plots Calibration refers to the assessment of whether observed and predicted values agree. 41 For example, whether observed event probabilities agree with a model’s estimated event probabilities (risks). Although an individual’s event probability cannot be observed (we only know if they had the outcome event or not), we can still examine calibration of predicted and observed probabilities by deriving smoothed calibration curves fitted using all the individuals’ observed outcomes and the model’s estimated event probabilities ( fig 4 and fig 5 ). At external validation, some miscalibration between the predicted and observed values should be anticipated. The more different the validation dataset is compared with the development dataset (eg, in terms of population case mix, outcome event proportion, timing and measurement of predictors, outcome definition), the greater the potential for miscalibration. Similarly, models developed using low quality approaches (eg, small datasets, unrepresentative samples, unpenalised rather than penalised regression) have greater potential for miscalibration on external validation. Calibration should be examined across the entire range of predicted values (eg, probabilities between 0 to 1), and at each relevant time point for which predictions are being made. Van Calster et al outline a hierarchy of calibration checks, 42 ranging from the overall mean to subgroups defined by patterns of predictor values. Fundamentally, calibration should be visualised graphically using a calibration plot that compares observed and predicted values in the external validation dataset, and the plot must include a smoothed flexible calibration curve (with a confidence interval) as fitted in the individual data using a smoother or splines. 
42 43 Many researchers, however, do not report a calibration plot, 44 and those that do tend to only report grouped points rather than a calibration curve across the entire range. Grouping can be gamed (eg, by altering the number of groups), only reveals calibration in the ill defined groups themselves, and caps the calibration assessment at the average predicted value in the lowest and highest group. Hence, grouping enables researchers to (deliberately) obfuscate any meaningful assessment of miscalibration in particular ranges of predicted values (an example is shown below). A calibration curve provides a more complete picture. For continuous outcomes, the calibration plot and smoothed curve can be supplemented by presenting the pair of observed (y axis) against predicted (x axis) values for all participants. For binary or time-to-event outcomes, observed (y axis) event probabilities against the model’s estimated event probabilities (x axis) can be added for groups defined by, for example, 10ths or 20ths of the model’s predictions—again, to supplement (not replace) a smoothed calibration curve. 43 The calibration plot should be presented in a square format, and the axes should not be distorted (eg, by changing the scale of one of the axes, or having uneven spacing across the range of values) as this could hide miscalibration in particular regions. Researchers should also add the distribution of the predicted values underneath the calibration plot, to show the spread of predictions in the validation dataset, perhaps even for each of the event and non-event groups separately. If censoring occurs before the time point of interest in the validation dataset, then the true outcome event status is unknown for the censored individuals, which makes it difficult to directly plot the calibration of model predictions at the time point of interest. A common approach is to create groups (eg, 10 groups defined by tenths of the model’s estimated event probabilities), and to plot the model’s average estimated probability against the observed (1–Kaplan-Meier) event probability for each group. However, this approach is unsatisfactory, because the number of groups and the thresholds used to define them are arbitrary; hence, it only provides information on subjectively chosen groups of participants and does not provide granular information on calibration or miscalibration at specified values or ranges of predicted values. To manage this problem, a smoothed calibration curve can be plotted that examines calibration across the entire range of predicted values (analogous to the calibration plot for binary outcomes) at a particular time point. This approach can be achieved using pseudo-observations (or pseudo-values), 45 46 47 48 or flexible adaptive hazard regression or a Cox model using restricted cubic splines. 49 More details are provided in supplementary material S2. Calibration plots and curves for the two examples are shown in figure 4 and figure 5 . The calibration plot for the binary outcome example ( fig 4 ) shows good calibration for event probabilities between 0 and 0.15. For calculated event probabilities beyond 0.2, the model overestimates the probability of mortality, as revealed by the smoothed calibration curve lying below the diagonal line. Had only grouped points been included (and not a smoothed curve across individuals), the extent of the miscalibration in the range of model predictions above 0.2 would be hidden. 
For example, consider if the calibration had been checked for 10 groups based on tenths of predicted values (see 10 circles in fig 4 ). Because most of the data involve patients with model predictions less than 0.2, nine of the 10 groups fall below predictions of 0.2. Further, the model’s estimated probabilities in the upper group have a mean of about 0.4, and information above this value is completely lost, incidentally where the miscalibration is most pronounced based on the smoothed curve across all individuals. Therefore, figure 4 demonstrates our earlier point that categorising into groups loses and hides information, and that the calibration curve is essential to show information across the whole range of predictions, including values close to 1. Although a well calibrated model is ideal, a miscalibrated model might still have clinical usefulness. For example, in figure 4 , miscalibration is most pronounced in regions where the model’s estimated mortality risks are very high (eg, >0.3), with actual observed risks about 0.05 to 0.3 lower. However, in this setting, whether a patient is deemed to have high or very high mortality risks is unlikely to change clinical decisions for that patient. By contrast, in regions where clinical risk thresholds are more relevant (eg, predictions ranging from 0.05 to 0.1), calibration is very good and so the model might still be useful in clinical practice despite the miscalibration at higher risks (see step 4). The calibration plot for the time-to-event outcome example shows that the predictions are systematically lower than the observed event risk at five years ( fig 5 ), with most of the calibration curve lying above the diagonal. In particular, for predictions between 0.1 and 0.8, the model appears to systematically underestimate the probability of recurrence within five years of a breast cancer diagnosis. The calibration curve’s confidence interval is important to reveal the precision of the calibration assessment. It also quantifies the uncertainty of the actual risk in a group of individuals defined by a particular predicted value. For example, for the group of individuals with an estimated risk of 0.8 in figure 5 , the 95% confidence interval around the curve suggests that this group’s actual risk is likely between 0.78 to 1. Quantifying calibration performance Calibration plots with calibration curves should also be supplemented with statistical measures that summarise the calibration performance observed in the plot. 50 Calibration should not be assessed using the Hosmer-Lemeshow test, or related ones like the Nam-D’Agostino test or Gronnesby-Borgan test, because these require arbitrary grouping of participants that, along with sample size, can influence the calculated P value, and does not quantify the actual magnitude or direction of any miscalibration. Rather, calibration should be quantified by the calibration slope (ideal value of 1), calibration-in-the-large (ideal value of 0) and—for binary or time-to-event outcomes—the observed/expected (O/E) ratio (ideal value of 1) or conversely the E/O ratio. A detailed explanation for each of these measures is given in supplementary material S3. Estimates of these measures should be reported alongside confidence intervals, and derived for the dataset as a whole and, ideally, also for key subgroups (eg, different ethnic groups, regions). 
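As a rough illustration of these summary measures, the sketch below computes the calibration slope, calibration-in-the-large, and O/E ratio for a binary outcome in R. It assumes vectors prob (the model's estimated event probabilities) and y (the observed 0/1 outcomes) are available for the validation dataset, for example as constructed in the earlier sketch; the simple formulations shown here would need adapting for time-to-event outcomes with censoring.
# Minimal R sketch of the calibration measures described above (binary outcome).
# 'prob' = model's estimated event probabilities; 'y' = observed 0/1 outcomes.
lp <- qlogis(prob)                                   # logit of the estimated probabilities
slope_fit <- glm(y ~ lp, family = binomial)          # logistic recalibration model
cal_slope <- coef(slope_fit)["lp"]                   # calibration slope (ideal value 1)
citl_fit  <- glm(y ~ offset(lp), family = binomial)  # same model with the slope fixed at 1
citl      <- coef(citl_fit)["(Intercept)"]           # calibration-in-the-large (ideal value 0)
oe_ratio  <- mean(y) / mean(prob)                    # observed/expected ratio (ideal value 1)
round(c(slope = unname(cal_slope), citl = unname(citl), OE = oe_ratio), 3)
confint.default(slope_fit)["lp", ]                   # Wald 95% confidence interval for the slope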
To quantify overall miscalibration based on the calibration curve, the estimated or integrated calibration index can be used, which respectively measure an average of the squared or absolute differences between the estimated calibration curve and the 45 degree (diagonal) line of ideal calibration. 51 52 Calibration measures are summarised in table 1 for the two examples, which confirm the visual findings in the calibration plots. For example, the binary outcome prediction model has a calibration slope of 0.72 (95% confidence interval 0.70 to 0.75), suggesting that predictions are too extreme; this is driven by the overprediction in those with estimated event probabilities above 0.2 ( fig 4 ). The time-to-event prediction model has an O/E ratio of 1.27, suggesting that the observed event probabilities are systematically higher than the model’s estimated values, which is seen by the smoothed calibration curve lying mainly above the diagonal line ( fig 5 ). Such situations could motivate model updating to improve calibration performance. 53 The results also emphasise how one calibration measure alone does not provide a full picture. For example, the calibration slope is close to 1 for the time-to-event prediction model (1.10, 95% confidence interval 0.88 to 1.33), but there is clear miscalibration owing to the O/E ratio of 1.27 (1.22 to 1.32). Conversely, O/E ratio is 1.01 (1.01 to 1.02) in the binary outcome example, suggesting good overall agreement, but the calibration slope is 0.72 (0.70 to 0.75) owing to the overestimation of high risks ( fig 4 ). Hence, all measures of calibration should be reported together and—fundamentally—alongside a calibration plot with a smoothed calibration curve. Quantifying discrimination performance Discrimination refers to how well a model’s predictions separate between two groups of participants: those who have (or develop) the outcome and those who do not have (or do not develop) the outcome. Therefore, discrimination is only relevant for prediction models of binary and time-to-event outcomes, and not continuous outcomes. Discrimination is quantified by the concordance (c) statistic (index), 11 54 and a value of 1 indicates the model has perfect discrimination, while a value of 0.5 indicates the model discriminates no better than chance. For binary outcomes, it is equivalent to the area under the receiver operating characteristic curve (AUROC) curve. It gives the probability that for any randomly selected pair of participants, one with and one without the outcome, the model assigns a higher probability to the participant with the outcome. What constitutes a high c statistic is context specific; in some fields where strong predictors exist, a c statistic of 0.8 might be considered high, but in others where prediction is more difficult, values of 0.6 might be deemed high. The c statistic also depends on the case mix distribution. Presenting an ROC curve over and above the c statistic (AUROC) has very little, if any, benefit. 55 56 Similarly, providing traditional measures of test accuracy such as sensitivity and specificity are not as relevant for prediction models, because the focus should be on the overall performance of the model’s predictions without forcing thresholds to define so-called high and low groups. If thresholds are important for clinical decision making, then clinical utility should be assessed at those thresholds, for example, using net benefit and decision curves (see step 4). 
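For the binary outcome case, the c statistic itself can be obtained without additional packages through its rank based (Mann-Whitney) identity, as in the sketch below; prob and y are assumed as in the previous sketches, and in practice dedicated packages would usually be used to obtain confidence intervals and the time-to-event extensions discussed next.
# Minimal R sketch of the c statistic (equivalently, the AUROC) for a binary outcome.
c_statistic <- function(prob, y) {
  r  <- rank(prob)                          # average ranks, so tied predictions count as 0.5
  n1 <- sum(y == 1)                         # participants with the outcome event
  n0 <- sum(y == 0)                         # participants without the outcome event
  (sum(r[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}
c_statistic(prob, y)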
Generalisations of the c statistic have been proposed for time-to-event models, most notably Harrell’s C index, but many other variants are available, including Efron’s estimator, Uno’s estimator, Göner and Heller’s estimator, and case mix adjusted estimates. 54 57 Royston’s D statistic is another measure of discrimination, 37 interpreted as the log hazard ratio comparing two equally sized groups defined by dichotomising the (assumed normally distributed) linear predictor from the developed model at the median value. Higher values for the D statistic indicate greater discrimination. Harrell’s C index and Royston’s D statistic measure discrimination over all time points up to a particular time point (or end of follow-up). However, usually an external validation study aims to examine a model’s predictive performance at a particular time point, and so time dependent discrimination measures are more informative, such as an inverse probability of censoring weighted estimate of the time dependent area under the ROC curve for the time point of interest (t). 58 Discrimination performance for the two examples is shown in table 1 , and show promising discrimination in both cases. For the binary outcome example, the model correctly identifies 80.8% concordant pairs (c statistic 0.81, 95% confidence interval 0.80 to 0.82). The time-to-event example has a Harrell’s C index of 0.67 (0.64 to 0.70) and a time dependent AUROC curve of 0.71 (0.65 to 0.76), suggesting that the model’s discrimination at five years is slightly higher than the discrimination performance averaged across all time points. Step 4: Quantifying clinical utility Where the goal is for predictions to direct decision making, a prediction model should also be evaluated for its overall benefit on participant and healthcare outcomes; also known as its clinical utility. 16 59 60 For example, if a model estimates a patient’s event probability above a certain threshold value (eg, >0.1), then the patient and their healthcare professionals could decide on some clinical action (eg, above current clinical care), such as use of a particular treatment, monitoring strategy, or lifestyle change. When externally validating the model, the clinical utility of this approach can be quantified by the net benefit, a measure that weighs the benefits (eg, improved patient outcomes) against the harms (eg, worse patient outcomes, additional costs). 61 62 It requires the researchers to choose a probability (risk) threshold, at or above which there will be a clinical action. The threshold should be chosen before a clinical utility analysis, based on discussion with clinical experts and patient focus groups, and indeed there might be a range of thresholds of interest, because a single threshold is unlikely to be acceptable for all clinical settings and individuals. Then, a decision curve can be used to display a model’s net benefit across the range of chosen threshold values, and compared with other decision making strategies (eg, other models, or options such as treat all and treat none). Further explanation is provided in supplementary material S4, and more detailed guidance is provided in previous tutorials. 61 62 63 We apply this clinical utility step to the two examples in figure 6 and figure 7 , and show results across the entire 0 to 1 probability range for illustration, although in practice a narrower range would be predetermined by clinical and patient groups, as mentioned. 
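Before turning to the results for the two examples, the sketch below illustrates how net benefit is calculated at a given threshold for a binary outcome and how a basic decision curve is drawn. It again assumes prob and y as before; the thresholds shown are purely illustrative, and software dedicated to decision curve analysis would normally be used in practice (adding confidence intervals and handling time-to-event outcomes).
# Minimal R sketch of net benefit and a basic decision curve (binary outcome).
net_benefit <- function(prob, y, pt) {
  n  <- length(y)
  tp <- sum(prob >= pt & y == 1)            # true positives when acting at threshold pt
  fp <- sum(prob >= pt & y == 0)            # false positives when acting at threshold pt
  tp / n - (fp / n) * pt / (1 - pt)         # false positives weighted by the odds of the threshold
}
thresholds   <- seq(0.05, 0.50, by = 0.01)  # illustrative range of threshold probabilities
nb_model     <- sapply(thresholds, net_benefit, prob = prob, y = y)
nb_treat_all <- mean(y) - (1 - mean(y)) * thresholds / (1 - thresholds)
plot(thresholds, nb_model, type = "l", xlab = "Threshold probability", ylab = "Net benefit")
lines(thresholds, nb_treat_all, lty = 2)    # treat all strategy
abline(h = 0, lty = 3)                      # treat none strategy
The pt/(1 - pt) weighting reflects the harm-benefit trade-off implied by the chosen threshold, which is why the threshold itself should be agreed with clinicians and patients in advance.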
Figure 6 shows that the binary outcome model has a positive net benefit for all thresholds below 0.44, where clinical thresholds are likely to fall in this clinical setting, with greater net benefit than the treat all strategy at all thresholds. Figure 7 shows that the time-to-event outcome model has a positive net benefit for thresholds up to 0.79, but it does not provide added benefit over the treat all strategy if key thresholds fall below 0.38. Step 5: Clear and transparent reporting The Transparent Reporting of a multivariable model for Individual Prognosis Or Diagnosis (TRIPOD) statement provides guidance on how to report studies validating a multivariable prediction model. 50 64 For example, the guidance recommends specifying all measures calculated to evaluate model performance and, at a minimum, to report calibration (graphically and quantified) and discrimination, along with corresponding confidence intervals. With the introduction of new sample size criteria for both developing and validating prediction models, 21 38 65 66 67 68 69 70 71 we also recommend reporting either the Cox-Snell or Nagelkerke R 2 , and the distribution of the linear predictor (eg, histograms for those with and without the outcome event, as shown in fig 2 and fig 3 , and at the base of the plots in fig 4 and fig 5 ). These additional reporting recommendations not only provide information on the performance of the model but also provide researchers with key information needed to estimate sample sizes for further external validation, model updating, or when developing new models. 38 65 66 68 Special topics Dealing with missing data The external validation dataset might contain missing data in some of the predictor variables or the outcome. A variety of methods are available to deal with missing data, including analysis of complete cases, single imputation (eg, mean or regression imputation), and multiple imputation. Handling of missing data during external validation is an unresolved topic and an area of active research. 72 73 74 Occasionally the model developers will specify how to deal with missing predictor values during model deployment; in that situation, the external validation should primarily assess that recommended strategy. However, most existing models do not specify or even consider how to deal with missing predictor values at deployment, and an external validation might then need to examine a range of plausible options, such as single or multiple imputation. Checking subgroups and algorithmic fairness An important part of external validation is to check a model's predictive performance in key clusters (eg, countries, regions) and subgroups (eg, defined by sex, ethnic group), for example, as part of examining algorithmic fairness. This is discussed in more detail in paper 1 of our series. 1 Multiple external validation studies and individual participant data meta-analyses Where interest lies in a model's transportability to multiple populations and settings, multiple external validation studies are often needed. 5 75 76 77 Then, not only is the overall (average) model performance of interest, but also the heterogeneity in performance across the different settings and populations. 5 Heterogeneity can be examined through data sharing initiatives and by using individual participant data meta-analyses, as described elsewhere. 4 78 Competing events Sometimes competing events can occur that prevent a main event of interest from being observed, such as death before a second hip replacement.
In this situation, if a model’s predictions are to be evaluated in the context of the real world (ie, where the competing event will reduce the probability of the main event from occurring), then the predictive performance estimates must account for the competing event in the statistical analysis (eg, when deriving calibration curves). This topic is covered in a related paper in The BMJ on validation of models in competing risks settings. 9
Extra material supplied by authors Data availability statement The GUSTO-I dataset is freely available, for which we kindly acknowledge Duke Clinical Research Institute. It can be installed in R by typing: load(url('https://hbiostat.org/data/gusto.rda')).
CC BY
no
2024-01-16 23:43:50
BMJ. 2024 Jan 15; 384:e074820
oa_package/78/c6/PMC10788734.tar.gz
PMC10788772
38055871
This is a correction to: Qizhi Pei, Lijun Wu, Jinhua Zhu, Yingce Xia, Shufang Xie, Tao Qin, Haiguang Liu, Tie-Yan Liu, Rui Yan, Breaking the barriers of data scarcity in drug-target affinity prediction, Briefings in Bioinformatics , Volume 24, Issue 6, November 2023, bbad386, https://doi.org/10.1093/bib/bbad386 In the originally published version of this manuscript, there were discrepancies in the affiliations of the authors. The correct affiliations are as follows: Qizhi Pei: Gaoling School of Artificial Intelligence, Renmin University of China, No.59, Zhong Guan Cun Avenue, Haidian District, 100872, Beijing, China and Engineering Research Center of Next-Generation Intelligent Search and Recommendation, Ministry of Education Tao Qin: Microsoft Research AI4Science, No.5, Dan Ling Street, Haidian District, 100080, Beijing, China Rui Yan: Gaoling School of Artificial Intelligence, Renmin University of China, No.59, Zhong Guan Cun Avenue, Haidian District, 100872, Beijing, China and Beijing Key Laboratory of Big Data Management and Analysis Methods. This error has been corrected online.
CC BY
no
2024-01-16 23:43:50
Brief Bioinform. 2023 Dec 6; 25(1):bbad477
oa_package/d6/0c/PMC10788772.tar.gz
PMC10788773
38113080
This is a correction to: Mohammad Rizwan Alam, Kyung Jin Seo, Jamshid Abdul-Ghafar, Kwangil Yim, Sung Hak Lee, Hyun-Jong Jang, Chan Kwon Jung, Yosep Chong, Recent application of artificial intelligence on histopathologic image-based prediction of gene mutation in solid cancers, Briefings in Bioinformatics , Volume 24, Issue 3, May 2023, bbad151, https://doi.org/10.1093/bib/bbad151 In the originally published version of this manuscript, the funding information was incorrect. It should be written as follows: "This research was supported by a grant from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HI21C0940)." This error has been corrected online.
CC BY
no
2024-01-16 23:43:50
Brief Bioinform. 2023 Dec 18; 25(1):bbad503
oa_package/5c/ba/PMC10788773.tar.gz
PMC10788774
38225982
Introduction Seeds serve as the foundation for plant reproduction and the continuity of generations. Larger seeds are thought to enhance the ability of plant seedlings to withstand external stresses and promote seedling survival, while plants with smaller seeds compensate for the relatively lower individual seed survival rate by producing a larger quantity of seeds [ 1 ]. Seed size plays a critical role in plant adaptation to the environment and significantly influences crop yield [ 2 ]. The regulation of seed size genes and genetic networks has been comprehensively studied. Plants regulate seed size through two primary mechanisms: maternal tissue-mediated control and syncytial tissue-mediated control. Several signaling pathways have been identified as regulators of seed size via modulation of maternal tissue growth. These pathways encompass the MAPK signaling pathway, the G-protein signaling pathway, the ubiquitin-proteasome pathway, phytohormone induction, and transcription factor regulation. The MAPK cascade reaction involves three classes of protein kinases: an MAPK kinase kinase (MKKK), an MAPK kinase (MKK), and an MAPK [ 3 ]. Arabidopsis thaliana MKK4 and MKK5 govern seed embryo development, and the double mutants display the characteristic of reduced seed size [ 4 ]. The heterotrimeric G protein complex comprises Gα, Gβ, and Gγ subunits. Overexpression of the A. thaliana AGG3 gene, which encodes the Gγ subunit, leads to an increase in seed and organ size, while agg3 mutants exhibit markedly reduced seed and organ sizes, indicating the promotion of Arabidopsis seed and organ growth by AGG [ 5 , 6 ]. DA1 , an Arabidopsis ubiquitin receptor, exerts negative regulation on seed size by modulating cell proliferation in the seed coat [ 7 ], while DA2 and ENHANCER OF DA1 ( EOD1 )/ BIG BROTHER ( BB ) negatively regulate seed size through their interaction with DA1 [ 8 , 9 ]. It has been demonstrated that brassinosteroids (BRs) and auxins (IAA) play a role in regulating seed growth. The Arabidopsis BSU1 gene is involved in BR signaling and promotes cell elongation and division [ 10 ]. Within the transcription factor regulatory pathway, the OsGRF4 gene primarily enhances seed size by promoting cell expansion and, to a lesser extent, cell proliferation [ 11 ]. APETALA2 ( AP2 ), through the maternal sporophyte and endosperm genome, controls seed weight and seed yield [ 12 , 13 ]. Syncytial tissue growth additionally influences seed size, with studies indicating the involvement of the HAIKU ( IKU ) pathway and certain phytohormones in the regulation of seed size through their impact on endosperm development. Arabidopsis haiku1 ( iku1 ), iku2 , and miniseed3 ( mini3 ) mutants produce small seeds. Pollination of these mutants with wild-type (WT) pollens results in normal-sized seeds, suggesting that IKU1 , IKU2 , and MINI3 function zygotically to regulate seed growth [ 14 , 15 ]. Yellow horn ( Xanthoceras sorbifolium Bunge) is a woody oilseed tree species, with its seeds being the primary oil-producing organs. The seed oil is characterized by exceptionally high levels of unsaturated fatty acids, especially oleic acid (~40%) and linoleic acid (~30%). Of greater significance, yellow horn oil contains ~3% of nervonic acid, a unique component not found in other plant oils [ 16 ]. Nervonic acid plays a crucial role in repairing damaged neural cells and promoting infant brain development [ 17 , 18 ]. 
In addition to its application as an edible oil, yellow horn oil serves as an excellent raw material for biodiesel production [ 16 , 19 , 20 ]. In addition, the oil-extracted seed cake, due to its abundant protein content, can also serve as feed for livestock or pets [ 19 ]. Therefore, yellow horn seeds have great commercial value [ 17 ]. Regrettably, compared with other major woody oilseed tree species, such as Idesia polycarpa Maxim., Juglans regia L., and Canarium oleosum , the yield of yellow horn remains relatively low, which constitutes one of the limiting factors for its development. Therefore, enhancing seed production is a crucial and urgent breeding objective for yellow horn. Substantial efforts have been made to improve yellow horn seed yield in recent times. For instance, at the physiological level, one study has shown that the quality of male parent pollen influences fruit set rate, and artificial selection through pollination can increase fruit set, consequently boosting seed production [ 21 ]. Another investigation has revealed a highly significant negative correlation between seed size and altitude, suggesting the potential for increasing seed yield through geographic transplantation [ 22 ]. A separate study also explored the impact of the canopy microclimate on seed yield, observing a significant positive correlation between light intensity, temperature, and seed yield [ 23 ]. Moreover, at the molecular genetic level, the ethylene receptor gene ( XsERS ) and the superoxide dismutase gene ( XsSOD ) have been identified as candidate genes affecting early ovule development [ 24 , 25 ]. Certain miRNA interaction modules, such as miR172b-ARF2 and miR7760-p3_1-AGL61 , have been identified as potential regulatory modules for seed development and lipid synthesis [ 26 ]. In summary, research on yellow horn seed yield is currently limited, with most studies focusing on physiological aspects. Even in the realm of molecular genetics, research has primarily stopped at candidate gene identification, lacking further functional validation and exploration of regulatory mechanisms. However, seed yield is a complex trait influenced by multiple factors, making it challenging to comprehensively address all aspects. In this study, we focused specifically on three crucial factors, hundred-grain weight (HWG), seed mass of single fruit (SFSM), and seed number of single fruit (SFSN), which significantly influence seed yield. Based on resequencing data from 222 yellow horn germplasms, we first explored the population structure and kinship relationships among these germplasms, and then performed genome-wide association analysis on the phenotypic data of these three traits over two years (2022 and 2020). By analyzing significant SNP loci, we successfully identified a candidate gene that regulates both hundred-grain weight and single-fruit seed mass. Subsequently, we verified the function of this gene in detail through Arabidopsis transgenic, RT–qPCR, subcellular localization, and seed embryo microscopic observation experiments. These findings provide important insights for investigating seed size in yellow horn and for molecular breeding of high-yielding cultivars.
Materials and methods Phenotype data collection and quality control Following seed maturation, a total of 143 and 222 sample trees were surveyed at the yellow horn Germplasm Resource Nursery, located in Tongliao City within the Inner Mongolia Autonomous Region, China, during the years 2020 and 2022, respectively. (Special note: the 222 sample trees surveyed in 2022 fully covered the 143 sample trees surveyed in 2020.) Six fruits were harvested from each sample tree, and each fruit served as a biological replicate. The total mass of single-fruit seeds (referred to as SFSM) was measured, and the number of seeds per single fruit (referred to as SFSN) was recorded. Subsequently, the hundred-grain weight (referred to as HWG) of the seeds was computed. The coefficient of variation among the six biological replicates for each sample tree was calculated, with the exclusion of outliers to maintain a coefficient of variation <0.1. The mean of the phenotypic data of the six fruits was then calculated as the final phenotypic data for this sample tree. Following this, the phenotypic data for SFSM, SFSN, and HWG were subjected to a Kolmogorov–Smirnov test to assess normality. Any outlier samples were removed to ensure the reliability of phenotypic data for subsequent GWAS analyses. Furthermore, a linear regression analysis was conducted to investigate the consistency of the same phenotype between the two years. Plant materials and genome resequencing Tissue samples were collected from young and healthy leaves and snap-frozen in liquid nitrogen, and the samples were ground into fine powder using a ball mill. Genomic DNA of yellow horn leaves was extracted using the CTAB method and assessed for quality. Genomic DNA sequences were fragmented using ultrasound and sequentially subjected to DNA end repair, poly-A addition at the 3′ end, phosphorylation at the 5′ end, ligation of junctions, PCR amplification, and magnetic bead purification to construct a qualified sequencing library. Subsequently, double-end sequencing (PE150) was performed using the Illumina platform (HiSeq 4000). Finally, the data were quality-checked using fastp (version 0.23.4) software [ 38 ]. SNP calling and quality control Four yellow horn reference genomes have been published: ZS4 [ 39 ], JGXP [ 40 ], WF18v1 [ 41 ], and WF18v2 [ 42 ]. Given the closer relationship between the sequenced samples and ZS4 yellow horn, we selected the ZS4 genome as our reference genome. This genome has 15 chromosomal pseudomolecules, in which 97.04% of the sequences are anchored, with a scaffold N50 size of 32.17 Mb, a contig N50 size of 1.04 Mb, and complete BUSCO value of 98.7%, and 24 672 protein-coding genes have been successfully annotated. These evaluation results confirm that the assembly and annotation of this genome exhibit a high quality level, rendering it suitable for use as a reference genome. Subsequently, Trimmomatic software (version 0.39) [ 43 ] was employed to eliminate adapters from the reads and filter out low-quality sequences. BWA software (version 0.7.17) [ 44 ] was utilized for mapping the reads to the reference genome. Picard software (version 3.0.0) was employed to sort the reads and eliminate PCR amplification-induced duplicates. GATK software (version 4.4.0.0) [ 45 ] was utilized for SNP and indel variant locus calling. Plink software (version 2) [ 46 ] was used to calculate the minimum allele frequency (MAF = 0.05), SNP genotyping deletion rate (geno = 0.1), and Hardy–Weinberg equilibrium filtering (hwe = 1e−06). 
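As a rough illustration of the phenotype cleaning and normality check described in the phenotype data collection subsection above, the following R sketch computes the per-tree coefficient of variation, derives the trait means, and applies the Kolmogorov–Smirnov test. The data frame layout and column names are assumptions, and for brevity the sketch simply excludes trees whose replicates exceed the CV cut-off rather than trimming individual outlier fruits as was done in the study.
# Minimal R sketch of the phenotype quality control (object layout and names are illustrative).
# 'pheno' is assumed to hold one row per fruit with columns: tree_id, SFSM (g), SFSN (count).
library(dplyr)
tree_means <- pheno |>
  group_by(tree_id) |>
  mutate(cv_sfsm = sd(SFSM) / mean(SFSM)) |>       # coefficient of variation across the six fruits
  filter(cv_sfsm < 0.1) |>                         # keep trees meeting the CV < 0.1 criterion
  summarise(HWG  = mean(100 * SFSM / SFSN),        # hundred-grain weight from per-fruit mass per seed
            SFSM = mean(SFSM),                     # mean single-fruit seed mass
            SFSN = mean(SFSN),                     # mean single-fruit seed number
            .groups = "drop")
ks.test(scale(tree_means$HWG), "pnorm")            # Kolmogorov-Smirnov test against a normal distribution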
Genome-wide association analysis and identification of candidate genes GWAS was conducted using GAPIT3 [ 47 ] to explore the associations between the three phenotypes (SFSM, SFSN, and HWG) and a total of 2 164 863 high-quality variant loci in 2020 and 2022, respectively. Bayesian-information and Linkage-disequilibrium Iteratively Nested Keyway (BLINK) [ 48 ], Fixed and random model Circulating Probability Unification (FarmCPU) [ 49 ], Mixed Linear Model (MLM) [ 50 ], and General Linear Model (GLM) [ 51 ] were the models employed for the analysis. The BLINK and FarmCPU models are multi-locus models, which outperform the single-locus MLM and GLM models in terms of both statistical power and computational efficiency [ 47 ]. Additionally, to mitigate the impact of population stratification-induced false positives, PCA was conducted on the genotype matrix, and the first three principal components were included as covariates in the GWAS analysis. Meanwhile, kinship matrices constructed using the VanRaden method [ 52 ] were incorporated into the GWAS analysis to mitigate spurious associations resulting from sample kinship. The Bonferroni method was applied as a correction for multiple hypothesis testing. The association between SNP loci and phenotypes was evaluated using −log10(0.05/ k ) as the significance threshold, with k representing the total number of variant loci. Manhattan and QQ plots were created using the qqman package in R (version 4.2.0) to visualize the GWAS results. In addition, the level of LD was determined by calculating the LD coefficient using PopLDdecay software (version 3.42) [ 53 ], and the LD decay distance was defined as the distance at which the LD coefficient r 2 reached half of its maximum value. To identify candidate genes with high confidence, the significant SNP loci identified in the GWAS analysis were further filtered, retaining those that were called multiple times. Subsequently, the LD decay distance was used as a reference to identify candidate genes located within 20.75 kb upstream and downstream of the significant SNP loci. Finally, gene functions were queried using the NCBI database to determine the final candidate genes. DNA sequencing and gene expression analysis To validate the accuracy of the GWAS analysis, specific primers were designed to clone the promoter region of the XsAP2 gene in nine germplasm samples with small HWG and seven samples with large HWG. Primer sequences can be found in Supplementary Data Table S5 . Subsequently, these germplasms were sequenced using the Sanger method to determine the genotypes of the Chr10_24012613 and Chr10_24013014 loci. In addition, to verify the effect of the genotypes of these two loci on the expression of the XsAP2 gene, the expression of the XsAP2 gene in these germplasms at the seed (DAP40) stage was determined by RT–qPCR. Subcellular localization The coding sequence of the XsAP2 gene without a stop codon was amplified using the T vector containing the target gene as a template and inserted into the pCAMBIA35S-eYFP expression vector. In addition, the H2B histone gene served as a nuclear marker. Agrobacterium tumefaciens (GV3101, Sangon Biotech, China) was separately transformed with three plasmids: 35S:XsAP2-eYFP, 35S:H2B-RFP, and 35S:eYFP. Subsequently, N. benthamiana leaves were transiently transformed.
Following 12 h of dark treatment and 24 h of normal treatment, the yellow (eYFP) and red (mCherry) fluorescent signals were visualized using a laser confocal microscope (Zeiss LSM880 Airyscan FAST+ NLO). Phylogenetic tree construction Protein sequences of the Arabidopsis AP2/ERF gene family were downloaded from The Arabidopsis Information Resource website ( https://www.arabidopsis.org/ ). The protein sequences of the Arabidopsis AP2/ERF gene family and EVM0013598 gene were aligned using Clustal X software for multiple sequence alignment. Subsequently, MEGA11 software [ 54 ] was used to construct the phylogenetic tree employing the neighbor-joining method. The specific parameters used were: 1000 bootstrap, JTT + G model, and partial deletion (50%). Generation of transgenic A. thaliana The full coding sequence of the XsAP2 gene was amplified using yellow horn flower cDNA as a template and inserted into the pGEM-T vector (A1360, Promega, USA). Sequencing was performed using universal primers (SP6/T7) to ensure sequence accuracy. The correct coding sequence was cleaved from the T vector and ligated between the BglII and MluI restriction sites of the pCAMBIA35S-eYFP vector ( Supplementary Data Fig. S6 ). Subsequent sequencing was conducted to confirm the accuracy of the sequence. The CaMV35S:XsAP2 plasmid was introduced into WT Arabidopsis (Col-0) by A. tumefaciens (GV3101, Sangon Biotech, China) using the inflorescence infestation method. The putative F 1 generation transformant seedlings were identified using glufosinate ammonium (100 mg/l). Leaf tissue genomes of F 1 -generation transformants were extracted, amplified by PCR using universal primers specific to the pCAMBIA35S-eYPF vector, followed by sequencing to verify sequence integrity and positive identification. Seeds from each F 1 -generation transformed line were individually collected and transplanted. The F 2 -generation transformants were further screened with glufosinate ammonium salt (100 mg/l), and only lines exhibiting a 3:1 pattern of trait segregation was chosen. Subsequently, F 2 -generation transformants were cultivated under appropriate growth conditions, and F 3 -generation seeds were collected on a plant-by-plant basis. From the seeds harvested from each Arabidopsis plant, a subset of seeds (~100 seeds) was randomly selected for replanting. Seedlings were once again screened using glufosinate ammonium salt (100 mg/l). The identity numbers of Arabidopsis plants with fully surviving seedlings were recorded. These plants are considered homozygous and suitable for subsequent phenotypic observation. Additionally, WT and EV Arabidopsis plants were cultured under identical conditions as the control. All vector sequences and primer sequences are detailed in Supplementary Data Table S3 . Phenotypic observation of transgenic Arabidopsis seeds Five identified XsAP2 transgenic lines of A. thaliana were selected for observation. First, the total number of siliques per plant was counted. Subsequently, the 1st to 10th siliques (counting from the base of the main branch) on the main branch from each Arabidopsis plant were selected. The seeds inside each selected silique were photographed using orthographic projections at a consistent scale. The number and orthographic area of seeds were quantified using ImageJ software. The seed weight of 10 siliques from each Arabidopsis plant was measured using a high-precision electronic balance with an accuracy of 1/10 000. The weight of 100 seeds was determined by calculation. 
As controls to evaluate the impact of the XsAP2 gene, contemporaneous WT Arabidopsis plants and EV Arabidopsis plants were selected. Microscopic observation of Arabidopsis embryo Embryo specimens were prepared following the method established by previous researchers [ 10 , 12 ]. After specimen preparation, Arabidopsis embryos of different lines were photographed at the same magnification using a digital microscope (Motic, M17T-HD-P). The projected area of the embryos was subsequently calculated using ImageJ software. RT–qPCR analysis To examine the tissue-specific expression patterns of the XsAP2 gene in yellow horn, we collected tissue samples from seeds, roots, stems, leaves, and flowers at various developmental stages. Considering the potential impact of polymorphisms at the Chr10_24012613 and Chr10_24013014 loci, located in the promoter region, on the tissue-specific expression profile of the XsAP2 gene, we collected tissue samples from small-seed, normal-seed, and large-seed germplasm. To determine the correlation between phenotypic changes in seeds of transgenic A. thaliana and XsAP2 gene expression levels, seed tissues were collected from five transgenic lines, as well as from WT and EV of A. thaliana . Total RNA was extracted by the CTAB method from all the above collected tissue materials. First-strand cDNA was synthesized using the TransScript-Uni One-Step gDNA Removal and cDNA Synthesis SuperMix kit (AU311, Trans, China). Real-time quantitative PCR analysis was conducted using the TB Green Premix Ex Taq II kit (RR820Q, TaKaRa, Japan) on a LightCycler 480 system (three biological replicates and four technical replicates). The reaction volume was 10 μl and the cDNA concentration was 100 ng/μl. The β-actin genes of A. thaliana and yellow horn were used as internal controls. Relative gene expression was calculated using the 2 −∆∆Ct method. Gene primer sequences, internal reference primer sequences, and detailed reaction conditions are available in Supplementary Data Table S5 . Statistical analysis Differences between groups were analyzed using one-way ANOVA, followed by post hoc multiple comparisons using Tukey’s method. The correlation between groups was analyzed using Pearson’s correlation coefficient. All analyses were conducted using the R environment (version 4.2.0).
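To make the association workflow concrete, a minimal R sketch of the GAPIT3 call and the Bonferroni threshold described in the GWAS methods is given below. The file names, object layout, and installation route are illustrative assumptions rather than the study's actual scripts, and argument defaults (for example, the VanRaden kinship) may differ between GAPIT versions.
# Minimal R sketch of the GWAS step with GAPIT3 (file names and layout are illustrative).
# devtools::install_github("jiabowang/GAPIT3")      # one possible installation route
library(GAPIT3)
myY  <- read.table("phenotypes.txt", header = TRUE)         # Taxa plus SFSM, SFSN, HWG
myGD <- read.table("genotypes_numeric.txt", header = TRUE)  # numeric genotype matrix
myGM <- read.table("snp_map.txt", header = TRUE)            # SNP name, chromosome, position
gwas <- GAPIT(
  Y         = myY,
  GD        = myGD,
  GM        = myGM,
  model     = c("BLINK", "FarmCPU", "MLM", "GLM"),  # the four models used in the study
  PCA.total = 3                                     # first three principal components as covariates
)
k <- 2164863                                        # variant loci retained after quality control
threshold <- -log10(0.05 / k)                       # Bonferroni-corrected threshold, about 7.64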
Results Phenotypic variation and correlation analysis for agronomic traits We assessed normality for each of the three phenotypes in both years. Based on Kolmogorov–Smirnov test results, only the HWG in 2020 (HWG2020; P = .04) and HWG2022 ( P = .01) phenotypes demonstrated a normal distribution ( Fig. 1c and f ). However, SFSN in 2020 (SFSN2020; P = .11), SFSN2022 ( P = .08), and SFSM in 2022 (SFSM2022; P = .09) did not conform to a normal distribution but exhibited characteristics of a skewed normal distribution ( Fig. 1b, e, and d ). Conversely, the distribution of the SFSM2020 ( P = .54) phenotypic data exhibited a notable departure from normality ( Fig. 1a ). Furthermore, linear regression analyses comparing the two years for each of the three phenotypes revealed more robust regression results for the SFSM ( R = 0.71) and HWG ( R = 0.68) phenotypes, implying a high degree of genetic stability in these traits. Conversely, the SFSN ( R = 0.37) phenotype exhibited weaker regression, possibly due to its susceptibility to environmental influences ( Fig. 1g–i ). Resequencing of yellow horn germplasm resources and variant discovery The genomic DNA from 222 yellow horn samples was resequenced, generating a total of 1690.7 Gb of raw data, with an average of 7.6 Gb per sample, 1675.5 Gb of filtered clean data, an average of 7.547 Gb of clean data per sample, an average Q30 ratio of 93.8%, and an average GC content of 39.3% ( Supplementary Data Table S1 ). The ZS4 genome was employed as the reference to align the resequenced data from the 222 samples, facilitating the identification of genome-wide variant sites. Following rigorous quality control and screening, we obtained a total of 2 164 863 high-quality variant sites, comprising 1 926 312 SNP sites and 238 551 indel sites ( Fig. 2a and b ). The distribution of these variants across the genome was heterogeneous, with the largest proportion residing in non-coding regions (983 930, 45.45%). Additionally, 168 210 (7.77%) of the variant loci were situated in exon regions and 260 650 (12.04%) in intron regions. Notably, ~97 394 (57.9%) of the SNPs located in exons led to non-synonymous mutations ( Supplementary Data Table S2 ). These high-quality loci hold substantial significance for investigating the genetic underpinnings of yellow horn traits. Principal component analysis and linkage disequilibrium decay distance To further investigate the genetic basis of these three agronomic traits, we initially conducted principal component analysis (PCA) on the genotype dataset to explore potential population stratification effects. The results indicated that the Bayesian information criterion (BIC) achieved the highest value when three principal components were included, suggesting that employing the first three principal components to correct for population stratification is the most suitable approach for the subsequent genome-wide association study (GWAS) analysis. The first three principal components of the PCA explained 4.99, 1.73, and 1.70% of the total variance, respectively. Furthermore, the subsequent fourth to tenth principal components accounted for <1.5% of the total variance ( Fig. 3a ). To visually illustrate the influence of the first three principal components on the population, a 3D plot of the PCA was generated ( Fig. 3b ).
The results indicated that all the sample points clustered together without distinct separation, suggesting the absence of significant population stratification among the studied samples. This finding aligns with the fact that all samples originate from a common germplasm resource base. To determine the screening range of candidate genes, linkage disequilibrium (LD) analysis was performed using the genotype dataset ( Fig. 3c ). The LD decay distance was set to half of the maximum LD distance. The results revealed an LD distance of 20.75 kb for yellow horn. Genome-wide association studies for three agronomic traits GWAS analysis was conducted for each agronomic trait using Bayesian-information and Linkage-disequilibrium Iteratively Nested Keyway (BLINK), Fixed and random model Circulating Probability Unification (FarmCPU), Mixed Linear Model (MLM), and General Linear Model (GLM). To control for potential false positives arising from group stratification effects, the first three principal components and the kinship matrix were included as covariates ( Fig. 3b , Supplementary Data Fig. S1 ). The results of the GWAS analysis are presented in Manhattan plots ( Figs 4 and 5 , Supplementary Data Fig. S2 ) and QQ plots ( Supplementary Data Fig. S3 ). A significance threshold of 7.64 [−log10 (0.05/2164863)] was applied to identify significant SNP loci associated with the traits. In the case of three traits, the HWG and SFSM traits exhibited associations with a considerable number of significant SNP loci, whereas the SFSN trait did not exhibit associations with any significant loci. Notably, the significant loci associated with the HWG and SFSM traits were located in the same chromosomal region, and a substantial proportion of these loci were redundant. In the case of the two years, the association results for the three traits in 2022 surpassed those from 2020. HWG2022 exhibited associations with 6 more loci than HWG2020, and SFSM2022 exhibited associations with 173 more loci than SFSM2020. Regrettably, both SFSN2022 and SFSN2020 did not exhibit associations with any significant loci, possibly due to the limited genetic influence on seed quantity in yellow horn single fruit. Among the four association models, MLM and GLM produced similar association results, as did the BLINK and FarmCPU models. However, the MLM and GLM models yielded a significantly higher number of associated loci than the BLINK and FarmCPU models. Interestingly, while the BLINK and FarmCPU models had fewer associated loci, they produced more statistically significant results for core loci. As the SFSN trait did not exhibit associations with significant loci and a substantial overlap was observed in the associated loci between the SFSM and HWG traits, we merged all association results and removed redundancy, resulting in 399 significant SNP loci ( Supplementary Data Table S4 ). Leveraging the genomic annotation of yellow horn and LD decay distance, we probed for candidate genes within a 20.75-kb range upstream and downstream of the significant SNP loci, resulting in the discovery of 72 candidate genes. Notably, the relative positional distribution of the significant SNP loci and candidate genes demonstrated marked imbalance: 68% of the significant SNP loci were located in the upstream or downstream regions of genes, 10% within gene exons, and 10% within introns, and 12% of the significant SNP loci had no corresponding identified candidate gene. 
Subsequently, we conducted a BLAST analysis using the NCBI online server for the protein sequences of all candidate genes, revealing that they predominantly encode protein kinases, calcium-binding proteins, hormone response factors, and transcription factors ( Supplementary Data Table S4 ). These findings suggest the critical involvement of regulatory factors in determining seed size and single-fruit yield in yellow horn. Identification of candidate genes involved in hundred-grain weight and single-fruit seed mass Among the numerous significant SNP loci, the Chr10_24013014 and Chr10_24012613 loci garnered substantial interest due to their consistent association across multiple GWAS analyses. Our investigation involved 16 GWAS analyses targeting two agronomic traits, namely hundred-grain weight (HWG2022 and HWG2020) and single-fruit seed mass (SFSM2022 and SFSM2020), utilizing four distinct association models (BLINK, FarmCPU, MLM, and GLM). Across these 16 GWAS analyses, the Chr10_24013014 locus was identified as a significant site in 12 instances, with P -values ranging from 2.01E−09 to 2.12E−28. Similarly, the Chr10_24012613 locus exhibited significance in six GWAS analyses, with P -values ranging from 2.50E−09 to 1.99E−18 ( Supplementary Data Table S4 ). Notably, all P -values for these two loci were well below the threshold for significant association (2.31E−08). These results substantiate the considerable impact of these loci on the HWG and SFSM traits in yellow horn. Further scrutiny revealed that the Chr10_24013014 locus resided 870 bp upstream of the EVM0013598 gene, whereas the Chr10_24012613 locus was situated within the 5′ untranslated region of the same gene, 469 bp upstream from the gene start ( Fig. 6a ). Examining the phenotype data corresponding to different genotypes at these loci, we found that samples with the CC genotype at the Chr10_24013014 locus exhibited significantly higher seed weight and single-fruit seed mass compared with those with the −C and −− genotypes ( Fig. 6c and e ). Similarly, samples with the CC genotype at the Chr10_24012613 locus displayed significantly higher values than those with the GC and GG genotypes ( Fig. 6b and d ). To explore the connection between the genotypes of the Chr10_24013014 and Chr10_24012613 loci and yellow horn seed size as well as single-fruit yield, we cloned the promoter region of the EVM0013598 gene (−952 to −370 bp) from nine large-seed germplasms and seven small-seed germplasms. The results revealed that the genotypes of the large-seed germplasms at Chr10_24013014 and Chr10_24012613 were CC and CC, while those of the small-seed germplasms at these loci were −C and GC ( Supplementary Data Fig. S4 ). These findings align with the results of the GWAS analysis ( Fig. 6b and c ). Furthermore, we measured the relative expression levels of the EVM0013598 gene at the DAP40 seed stage (40 days after pollination) ( Fig. 6f ). The results indicated a significantly lower relative expression level of the EVM0013598 gene in the large-seed germplasms compared with the small-seed germplasms ( P = .005). Simultaneously, a correlation analysis was performed between the relative expression levels of the EVM0013598 gene in these 16 germplasms and the phenotypic data (HWG and SFSM). The results demonstrated a strong negative correlation between the relative expression level of the EVM0013598 gene and both the HWG trait ( R = −0.72) and the SFSM trait ( R = −0.71).
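As an illustration of the genotype–phenotype comparison and the expression–phenotype correlation reported here, the snippet below groups a hypothetical germplasm table by genotype and computes Pearson correlations; the input file and column names are assumptions, not the study's actual data files.

```python
# Illustrative sketch: compare HWG between genotype classes at one locus and
# correlate EVM0013598 (XsAP2) relative expression with HWG and SFSM.
import pandas as pd
from scipy import stats

# Hypothetical table: one row per germplasm with genotype calls, phenotypes,
# and RT-qPCR relative expression (columns are assumed).
df = pd.read_csv("germplasm_table.csv")   # columns: genotype_24013014, HWG, SFSM, rel_expr

cc = df.loc[df["genotype_24013014"] == "CC", "HWG"]
other = df.loc[df["genotype_24013014"] != "CC", "HWG"]
t_stat, p_val = stats.ttest_ind(cc, other, equal_var=False)
print(f"HWG, CC vs other genotypes: t = {t_stat:.2f}, P = {p_val:.3g}")

for trait in ("HWG", "SFSM"):
    r, p = stats.pearsonr(df["rel_expr"], df[trait])
    print(f"expression vs {trait}: R = {r:.2f}, P = {p:.3g}")
```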
Combining these results, we infer that polymorphism at the Chr10_24013014 and Chr10_24012613 loci affects the transcriptional activity of the EVM0013598 gene, thereby influencing yellow horn seed size and single-fruit yield. The XsAP2 gene is a homolog of APETALA2 The protein sequence of the EVM0013598 gene was subjected to a BLAST analysis, confirming its affiliation with the AP2/ERF transcription factor family. To further understand the function of the EVM0013598 gene, a phylogenetic tree was constructed with this gene and the AP2/ERF gene family of A. thaliana . The results showed that this gene belongs to the AP2 subfamily of the AP2/ERF transcription factor gene family. Further analysis found that it has the highest similarity (64%) with the Arabidopsis APETALA2 ( AT4G36920 ) gene ( Supplementary Data Fig. S5 ). To facilitate subsequent research, we renamed this gene XsAP2 . Subcellular localization and tissue expression pattern of the XsAP2 gene The subcellular location of XsAP2 was determined by fusing it with enhanced yellow fluorescent protein (eYFP) and expressing the fusion gene in Nicotiana benthamiana leaf epidermal cells under the CaMV35S promoter. In the absence of the target gene sequence, the eYFP protein was distributed throughout the N. benthamiana leaf epidermal cells. However, when the XsAP2 gene was fused with eYFP, fluorescence was observed only at specific locations within the cell ( Fig. 7b ). In addition, we observed expression of the histone gene ( H2B ) fused with red fluorescent protein (RFP) at the same location ( Fig. 7b ); H2B-type histones are recognized as nucleus-localized proteins. These results suggest that XsAP2 is localized in the nucleus, consistent with the localization pattern of transcription factors. The relative expression levels of the XsAP2 gene in various tissues of yellow horn, including roots, stems, leaves, flowers, and seeds, were analyzed by RT–qPCR . Given that polymorphisms at the Chr10_24013014 and Chr10_24012613 loci potentially affect the expression of the EVM0013598 gene, the analysis was conducted on small-seed, normal-seed, and large-seed germplasm. It is important to note that the genotypes of the small-seed germplasm were −C at Chr10_24013014 and GC at Chr10_24012613, while the genotypes of the large-seed germplasm were CC at Chr10_24013014 and CC at Chr10_24012613, with other genotype combinations representing normal-seed germplasm. The results revealed that the XsAP2 gene was broadly expressed in various tissues, with similar expression profiles among the different types of germplasm ( Fig. 7a ). Interestingly, the expression of the XsAP2 gene first increased and then decreased across seed developmental stages, with high expression levels at 40 and 50 days after pollination. This expression pattern mirrors the growth pattern of yellow horn seeds, which grow rapidly from 25 to 50 days after pollination; seed size had practically stopped changing by day 60 after pollination, when the seeds transitioned from the growth stage to the maturity stage. The analysis of tissue expression profiles and subcellular localization led us to hypothesize that XsAP2 gene expression is regulated at both spatial and temporal levels. Additionally, the gene could potentially affect the expression of specific genes, contributing to seed growth and development.
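The article does not state the exact RT–qPCR quantification formula, so the short sketch below assumes the commonly used 2^(−ΔΔCt) method purely for illustration; the Ct table layout, the reference gene name (XsTUB), and the calibrator sample are all hypothetical.

```python
# Hedged sketch of RT-qPCR relative quantification using the common 2^(-ΔΔCt) method;
# the study does not specify its formula, so this is an assumption for illustration only.
import numpy as np
import pandas as pd

# Hypothetical Ct table: columns sample, gene ('XsAP2' or a reference), Ct (assumed layout).
ct = pd.read_csv("ct_values.csv")

def relative_expression(ct_table, target="XsAP2", reference="XsTUB", calibrator="small_seed_1"):
    """Return 2^(-ΔΔCt) per sample relative to a calibrator sample."""
    wide = ct_table.pivot_table(index="sample", columns="gene", values="Ct", aggfunc="mean")
    delta_ct = wide[target] - wide[reference]            # ΔCt = Ct(target) - Ct(reference)
    delta_delta_ct = delta_ct - delta_ct.loc[calibrator]  # ΔΔCt relative to the calibrator
    return np.power(2.0, -delta_delta_ct)

print(relative_expression(ct))
```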
Overexpression of the XsAP2 gene reduces seed size and yield To explore the influence of XsAP2 on seed size and yield, we engineered a CaMV35S:XsAP2 overexpression vector and introduced it into A. thaliana with a Col-0 genetic background. After multigenerational screening, five pure transgenic lines (OE1–OE5) were obtained. Prior to the phenotypic observation of these lines, we examined XsAP2 gene expression levels. Compared with the WT (Col-0) and empty vector (EV) plants, the transgenic lines exhibited a several thousand-fold increase in XsAP2 expression, confirming successful overexpression of the gene ( Fig. 8c ). Examination of Arabidopsis seeds revealed variation in seed size with position on the plant. To evaluate the effect of the XsAP2 gene on seeds more precisely, we therefore examined siliques from the same position on the main branch. Compared with WT and EV, the transgenic lines produced smaller and lighter seeds ( Fig. 8a ). Specifically, seed area was reduced by 12.9–16.2% and hundred-seed weight by 21.2–24.7% ( Fig. 8e and f ). In mature Arabidopsis seeds, the embryo occupies the majority of the volume, so we further examined the impact of the XsAP2 gene on seed embryos. The results showed that the seed embryo area in the five transgenic Arabidopsis lines was reduced by 19.7–22.1% compared with WT and EV ( Fig. 8b and h ). Although the reduction ratios of seed area, embryo area, and seed weight differed in the transgenic Arabidopsis , seed area and embryo area do not take thickness into account, so these results do not conflict; rather, they collectively indicate that overexpression of the XsAP2 gene reduces the size of Arabidopsis seeds. In general, seed size and seed number are balanced, such that a reduction in seed size could potentially lead to an increase in seed number. We therefore further investigated the influence of the XsAP2 gene on the number of seeds per silique and the total number of siliques per plant in Arabidopsis . Unexpectedly, compared with WT and EV, the five transgenic lines showed a reduction of 4.1–12.1% in seeds per silique and a 9.7–11.9% decrease in the total number of siliques per plant ( Fig. 8d and g ). These findings suggest that overexpression of the XsAP2 gene not only results in smaller seeds but also fails to produce a compensatory increase in seed number.
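For readers who wish to reproduce this style of comparison, the fragment below computes percent change relative to WT and a one-way ANOVA across lines; the measurement table and its columns are hypothetical and the test choice simply mirrors standard practice, not necessarily the authors' exact analysis.

```python
# Illustrative sketch: percent change in seed area relative to WT and a one-way ANOVA
# across lines (WT, EV, OE1..OE5). Input table and columns are assumed.
import pandas as pd
from scipy import stats

seeds = pd.read_csv("seed_area.csv")      # assumed columns: line, area_mm2

wt_mean = seeds.loc[seeds["line"] == "WT", "area_mm2"].mean()
for line, group in seeds.groupby("line"):
    change = 100 * (group["area_mm2"].mean() - wt_mean) / wt_mean
    print(f"{line}: {change:+.1f}% vs WT")

groups = [g["area_mm2"].values for _, g in seeds.groupby("line")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"one-way ANOVA across lines: F = {f_stat:.2f}, P = {p_val:.3g}")
```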
Discussion Impact of different models on genome-wide association study analysis Since the first GWAS [ 27 ], the approach has developed into an effective tool for studying the genetic basis of complex traits and has been used extensively to identify economically valuable agronomic traits [ 28 , 29 ]. However, GWAS analyses frequently yield false negatives and false positives, attributable to interspecies variability and the complexity of genetic trait relationships [ 30 ]. Consequently, selecting an appropriate association model is crucial [ 31 ]. We employed four association models to conduct the GWAS analysis in this study and found substantial variation in the results generated by these models. The multi-locus models (BLINK and FarmCPU) identified only a few significant loci, while the single-locus models (GLM and MLM) identified numerous significant loci ( Supplementary Data Table S4 ). Regrettably, the results from GLM and MLM mostly represented false associations, with most of the identified candidate genes unrelated to the studied traits. This could be attributed to model overfitting, and similar outcomes have been reported in other studies [ 32 ]. Despite identifying fewer loci, the FarmCPU and BLINK models succeeded in identifying the reliable candidate gene XsAP2 ( Figs 4 and 5 , Supplementary Data Fig. S2 ). These results suggest that the FarmCPU and BLINK models are more suitable for GWAS studies of seed size in yellow horn. This finding may also have implications for the study of other traits in yellow horn through GWAS. Polymorphisms at the Chr10_24013014 and Chr10_24012613 loci affect XsAP2 gene expression In this study, based on the resequencing of 222 yellow horn germplasms, we aimed to dissect the molecular mechanisms underlying yellow horn seed yield. In the GWAS analysis of the three key factors affecting yellow horn seed yield (HWG, SFSM, and SFSN), both HWG and SFSM were associated with several significant loci, whereas SFSN showed no significant associations. The Chr10_24013014 and Chr10_24012613 loci on chromosome 10 drew our attention due to their repeated associations ( Supplementary Data Table S4 ). Considering the LD genetic distance, we identified the candidate gene XsAP2 downstream of these two loci. The Chr10_24013014 and Chr10_24012613 loci lie in the XsAP2 gene promoter region, leading us to hypothesize that mutations at these loci may affect the expression activity of the XsAP2 gene. To test this hypothesis, we selected nine large-seed germplasms and seven small-seed germplasms and determined their genotypes at the Chr10_24013014 and Chr10_24012613 loci through Sanger sequencing. The results showed that the genotypes of the large-seed germplasm at these loci were CC and CC, while the small-seed germplasm had genotypes −C and GC at these loci ( Supplementary Data Fig. S4 ). These results were consistent with the GWAS analysis. The tissue expression profile of XsAP2 indicated a high expression level during seed development (DAP40). Therefore, we quantified XsAP2 gene expression at this stage using RT–qPCR . The results revealed that the expression of the XsAP2 gene was significantly lower in the large-seed germplasm than in the small-seed germplasm ( Fig. 6f ).
We further conducted a correlation analysis between the relative expression of the XsAP2 gene and the phenotypic data, demonstrating a strong negative correlation between the relative expression of the XsAP2 gene and seed weight (HWG, R = −0.72) as well as single-fruit seed mass (SFSM, R = −0.71) ( Fig. 6g ). These findings support the conclusion that the Chr10_24013014 and Chr10_24012613 loci can influence the expression of the XsAP2 gene, subsequently affecting yellow horn seed size and single-fruit yield. The XsAP2 gene functions to regulate seed size and yield AP2 transcription factors play diverse roles in plant growth and development [ 33 , 34 ]. Through GWAS analysis in yellow horn, this study identified an AP2 gene affecting seed size and single-fruit yield and confirmed its biological role. Tissue expression analysis revealed that the XsAP2 gene was expressed in various tissues of yellow horn, with higher expression in flowers and seeds, particularly during the period of rapid seed growth ( Fig. 7a ). This suggests that the XsAP2 gene is involved in flower and seed development. To further explore the biological function of XsAP2 , we overexpressed the XsAP2 gene in WT Arabidopsis . Compared with WT and EV, the XsAP2 transgenic lines showed varying reductions in seed area, HWG, and embryo area ( Fig. 8e, f, and h ). These results indicate that XsAP2 negatively regulates seed size, a function similar to that of the Arabidopsis AP2 gene [ 12 , 13 , 35 ]. Additionally, a slight reduction in the number of siliques per plant and the number of seeds per silique occurred in transgenic Arabidopsis compared with WT and EV ( Fig. 8d and g ). The number of fruits is correlated with the number of flowers, and the Arabidopsis AP2 gene is best known as a class A gene in the ABCDE model of flower development, antagonizing class C genes and controlling the morphogenesis of floral organs [ 36 , 37 ]. In addition, the XsAP2 gene is a homolog of the Arabidopsis AP2 gene ( Supplementary Data Fig. S5 ). Together, these results indicate that overexpression of the XsAP2 gene negatively regulates seed size and yield in A. thaliana , demonstrating that the XsAP2 gene functions in regulating seed size and yield. Moreover, both HWG and SFSM of yellow horn were strongly correlated with the relative expression of the XsAP2 gene ( Fig. 6g ). Therefore, we hypothesize that the XsAP2 gene also exerts some negative regulatory effect on seed size and single-fruit yield in yellow horn.
Conclusions Our study has elucidated that mutations at the Chr10_24013014 and Chr10_24012613 loci have the capacity to influence the expression activity of the XsAP2 gene. Furthermore, the XsAP2 gene exerts a certain degree of negative regulation on yellow horn seed size and single-fruit yield. These findings unveil the genetic regulatory mechanisms governing yellow horn seed size and yield, laying the groundwork for the development of molecular markers for early identification of high-yielding yellow horn plants and providing valuable insights for molecular breeding aimed at achieving higher yellow horn yields.
Abstract Yellow horn ( Xanthoceras sorbifolium Bunge) is a woody oilseed tree species whose seed oil is rich in unsaturated fatty acids and rare nervonic acid, and can be used as a high-grade edible oil or as a feedstock for biodiesel production. However, the genetic mechanisms related to seed yield in yellow horn are not well elucidated. This study identified 2 164 863 high-quality variant loci based on whole-genome resequencing data from 222 yellow horn germplasm accessions. We conducted genome-wide association study (GWAS) analysis on three core traits that influence seed yield (hundred-grain weight, single-fruit seed mass, and single-fruit seed number) using phenotypes from 2020 and 2022, and identified 399 significant SNP loci. Among these loci, the Chr10_24013014 and Chr10_24012613 loci caught our attention due to their consistent associations across multiple analyses. Through Sanger sequencing, we validated the genotypes of these two loci across 16 germplasms, confirming their consistency with the GWAS analysis results. Downstream of these two significant loci, we identified a candidate gene encoding an AP2 transcription factor, which we named XsAP2 . RT–qPCR analysis revealed high expression of the XsAP2 gene in seeds and a significant negative correlation between its expression level and both hundred-grain weight and single-fruit seed mass, suggesting a potential role in normal seed development. Transgenic Arabidopsis lines overexpressing the XsAP2 gene exhibited varying degrees of reduction in seed size, number of seeds per silique, and number of siliques per plant compared with wild-type Arabidopsis . Combining these results, we hypothesize that the XsAP2 gene may have a negative regulatory effect on the seed yield of yellow horn. These results provide a reference for the molecular breeding of high-yielding yellow horn.
Supplementary Material
Acknowledgements This work was financially supported by the Youth Top Talent Project of the Ten Thousand Talents Program of the State and Natural Science Foundation of China (31870594). Author contributions Z.Z., H.Y., and L.W. designed the research. Z.Z. performed data collection and analysis and the field experiments, and wrote the manuscript. C.L. and W.Z. provided experimental materials and supplemented the experimental protocol. Y.Y. and Z.Z. completed the phenotyping. Q.B. provided resequencing data. H.Y. and L.W. helped to solve the experimental problems, reviewed the manuscript, and provided constructive comments. Data availability The data underlying this article are available in the NCBI database at https://www.ncbi.nlm.nih.gov/sra , and can be accessed with PRJNA1031336. The detailed run code and parameters for the GWAS analysis are available on GitHub ( https://github.com/ziquanzhao/Python_GX/tree/master/Python_apply/EasyGWAS ). Conflict of interest The authors declare no conflicts of interest. Supplementary data Supplementary data is available at Horticulture Research online.
CC BY
no
2024-01-16 23:43:50
Hortic Res. 2023 Nov 22; 11(1):uhad243
oa_package/04/f3/PMC10788774.tar.gz
PMC10788775
38225981
Introduction Carnations ( Dianthus caryophyllus L.) belong to the family Caryophyllaceae and are a major ornamental plant species found throughout the world. Due to their colorful flowers and abundant forms, they are widely used as cut flowers and potted and yard flowers, and in the landscaping of flower beds that are loved by people all over the world. Although there are already many varieties of carnation on the market, there is a strong demand from consumers and growers for new cultivars with specific characteristics. Consumers expect cut flowers to have a range of colors, rich fragrance, and long vase life. Growers expect them to be disease resistant and to bloom continuously. So far, growers have been using interspecific and intraspecific hybridization strategies for creating cultivars with diversity and quality of ornamental traits [ 1 , 2 ]. In recent years, it has been reported that a high-quality genome helps to increase the efficiency of breeding and improvement of quality in plants such as apple [ 3 ] and coconut [ 4 ]. Assembling a high-quality genome is a common way to gain a better understanding of a species. Researchers today can easily obtain chromosomal-level assemblies using Pacific Biosciences (PacBio) single-molecule real-time (SMRT), Oxford Nanopore Technologies (ONT), and high-throughput chromatin conformation capture (Hi-C) sequencing technology. However, for several years [ 5 ] there have been gaps in the genome because of the weakness of the assembly algorithms, sequencing methods, and so on [ 6 ]. Most of these gaps exist in tandem repeats and segmental duplications that are difficult to resolve [ 7 ]. In the first reported telomere-to-telomere (T2T) human genome, the gaps occurred in regions where there were tandem and complex repeats [ 8 ]. These expanded repeat contents and repeat-mediated structural rearrangements provide insight into the evolution of the species and the chromosome structure [ 9 ]. The short reads are known to produce low-quality de novo genome assemblies, because they cannot span long and complex regions [ 10 ]. For this reason, there were many gaps in draft genomes, which reduced the number of genes and overlooked the ‘dark matter’ regions in the genome assembly. The improvements in long-read sequencing technologies, particularly third-generation sequencing technology and assembly algorithms, enabled T2T assembled genomes to be achieved. A complete T2T genome containing the full genome information of a species, could be seen as a final goal of a genome assembly. It would avoid mapping errors and improve the precision of calling variation; identify genes and genetic information that has been lost; provide more accurate haplotype genome information; and reveal the evolutionary history of centromeres and telomeres [ 11 ]. To date, several genomes have been reported as being gap-free or T2T, such as Arabidopsis [ 12 ], watermelon [ 13 ], rice [ 14 ], kiwifruit [ 15 ], bitter melon [ 16 ], Rhodomyrtus tomentosa [ 17 ], and grapes [ 18 ]. However, no T2T genome has been published for carnation. The previously published chromosome-level D. caryophyllus ‘Scarlet Queen’ (SQ) genome retains several gaps [ 19 ] ( Table 1 ). Gene expression is one of the key processes of trait formation. Recently many studies found that genetic variation and unbalanced expression of alleles are responsible for trait diversity. An imbalance in mRNA abundance between alleles has been referred to as ‘allele-specific expression’ (ASE). 
For example, in apple, MYB110a encodes a transcription factor regulating anthocyanin biosynthesis, and a transposable element (TE) insertion in the allele MYB110a results in ASE [ 20 ]. In strawberries, researchers found that specific TE insertion into an allele of TFL caused ASE, resulting in a change from flowering once to continuous flowering [ 21 ]. Similar findings have been found in other species, such as Arabidopsis [ 22–24 ] and barley [ 25 ]. Genome architecture is the arrangement of functional elements within the genome [ 26 ] and can be represented in a linear fashion. It can play a pivotal role in gene regulation [ 27 ]. For example, introns have been shown to have multiple effects on expression regulation [ 28 , 29 ]. The widely distributed TEs among eukaryotic species contribute to genome architecture, and undergo independent expansion [ 30 ]. TEs have been reported to be able to regulate the expression of genes in regulatory networks [ 30 ], enhancers [ 31 , 32 ], transcription factor binding sites [ 33 ], insulator sequences [ 34 ], and repressive elements [ 35 ]. Research in apple [ 3 ] demonstrated that TE insertion can enhance gene expression and alter the phenotype. In general, intact long terminal repeats (LTRs) harbor promoter and terminator sequences [ 36 ], and could be identified more in the completed genome assemblies [ 37 ]. TEs have been reported that are ubiquitous in the D. caryophyllus genome [ 19 , 38 ], but remain less discussed. In addition, the lack of complete and accurate genomic data has hampered our studies of genome architecture. By combining long- and short-read sequencing data with state-of-the-art assembly algorithms, we generated haplotype-resolved T2T genome assemblies of D. caryophyllus ‘Baltico’. Based on the gap-free genome, we bridged the gaps in all telomeres and analyzed the ‘dark regions’ within the telomeres and centromeres. We speculate that D. caryophyllus may have a unique centromere region. Based on the comprehensive genome architectures of the haplotypes, we investigated the correlation between genome architecture and gene expression. We analyzed the expression patterns of allelic genes and found that 29.28–33.94% of them showed ASE in different tissues between the two haplotypes. We also found specific genome architecture between the haplotypes which could contribute to the ASE. We speculate that the gene, coding sequence (CDS), and intron lengths and exon numbers were correlated with gene expression, and that TEs insertions are widely characterized as repressive elements involved in gene regulatory networks in D. caryophyllus .
Materials and methods Plant materials and genome sequencing The carnation variety 'Baltico' (2 n = 30) used in this study was collected from the experimental field of the Comprehensive Experimental Base of the Shenzhen Institute of Agricultural Genomics, Chinese Academy of Agricultural Sciences (Shenzhen, Guangdong, China). For the HiFi data, young leaves were collected and genomic DNA was extracted using the cetyltrimethylammonium bromide (CTAB) method. Subsequently, a PCR-free SMRT library with an insert size of 15 kb was constructed and sequenced on the PacBio Sequel II platform. For each UL nanopore library, ~8–10 μg of genomic DNA was size-selected (>100 kb) using the SageHLS HMW library system (Sage Science, USA). The DNA was then processed using the Ligation Sequencing 1D Kit (SQK-LSK109, Oxford Nanopore Technologies, UK) following the manufacturer's instructions. Approximately 800 ng of DNA libraries were constructed and sequenced on a PromethION instrument (Oxford Nanopore Technologies, UK) at the Genome Center of Grandomics (Wuhan, China). For the Hi-C data, freshly harvested leaves were lysed, and DpnII endonuclease was used to digest the fixed chromatin. The 5′ overhangs were filled in with biotin-labeled nucleotides, and the resulting blunt ends were ligated together using DNA ligase. Proteins were removed with protease to release the DNA molecules from the crosslinks. The purified DNA was then sheared into fragments ranging from 300 to 600 bp. Finally, libraries were quantified and sequenced on the MGI-2000 platform. RNA sequences were obtained from pooled stems, leaves, and flowers of carnation 'Baltico' and used for genome structure annotation; young leaves, flowers, and roots were used for ASE analysis. The extracted RNA was used to construct cDNA libraries, which were sequenced on the Illumina HiSeq X platform to generate 150-bp paired-end reads. Genome assembly and evaluation The initial contig-level assembly of the genome was generated using hifiasm [ 67 ] (0.19.2-r560) with the HiFi reads, UL reads (>100 000 bp), and Hi-C data. The initial assembly was further filtered by removing organellar contigs identified by comparison against the mitochondrial genome, the chloroplast genome, and the nucleotide collection (nt) database. 3D-DNA [ 68 ] and JUICER [ 69 ] were used to sort and orient the contigs into pseudochromosomes, and manual curation was performed with Juicebox Assembly Tools (JBAT). Gaps and missing telomeres were then filled with error-corrected ONT data using TGS-gapcloser [ 70 ] and manually checked with blastn [ 71 ]. Pilon [ 72 ] was used to polish the ONT-derived sequences. KAT [ 73 ] and BUSCO v5.2.2 [ 74 ] were used to evaluate the quality of the assembled genome using the 'eudicots_odb10' and 'embryophyta_odb10' databases. Switch errors were evaluated with calc_switchErr ( https://github.com/tangerzhang/calc_switchErr ), and the QV was evaluated with yak ( https://github.com/lh3/yak ) using the short-read sequencing data. Furthermore, the quality of the two haplotypes was assessed against the 72 L genetic map of the published carnation genome [ 39 ] using ALLMAPS ( https://github.com/allmaps/allmaps ). Tidk ( https://github.com/tolkit/telomeric-identifier ) was used to identify the positions of telomeres in the T2T assembly. Tandem repeats finder (TRF) [ 75 ] with the parameters '1 1 2 80 5 200 2000 -d -h -l 1', ModDotPlot ( https://github.com/marbl/ModDotPlot ), and srf ( https://github.com/lh3/srf ) were used to identify the candidate centromere location on each chromosome.
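As a simplified stand-in for the telomere screening performed with tidk (it is not the tool itself), the sketch below counts occurrences of the plant telomeric repeat in the first and last 20 kb of each pseudochromosome; the FASTA path and window size are assumptions.

```python
# Simplified stand-in for telomere screening (not tidk): count plant telomeric repeats
# (TTTAGGG / CCCTAAA) in the first and last 20 kb of each pseudochromosome.
from Bio import SeqIO

FASTA = "baltico_hap1.fasta"   # assumed path to one haplotype assembly
WINDOW = 20_000
MOTIFS = ("TTTAGGG", "CCCTAAA")

def motif_count(seq: str) -> int:
    seq = seq.upper()
    return sum(seq.count(m) for m in MOTIFS)

for record in SeqIO.parse(FASTA, "fasta"):
    head = str(record.seq[:WINDOW])
    tail = str(record.seq[-WINDOW:])
    print(f"{record.id}\tstart_repeats={motif_count(head)}\tend_repeats={motif_count(tail)}")
```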
Genome annotation For protein-coding gene prediction, we used homology-based, de novo, and transcriptome-based prediction. Homologous proteins from eight plant genomes, including Arabidopsis thaliana , Oryza sativa , Rosa chinensis , Vitis vinifera , Carica papaya , D. caryophyllus _draft_r1.0, Solanum lycopersicum , and Beta vulgaris , together with the established carnation genome annotation [ 19 ], were aligned to the 'Baltico' genome assembly using exonerate v2.2.0 and AUGUSTUS v3.3.3. For transcriptome-based prediction, RNA-seq data from stems, leaves, and flowers were mapped onto the 'Baltico' genome using HISAT2 v2.1.0 [ 76 ]. In addition, Trinity [ 77 ] was used to assemble the RNA-seq data, and the resulting assemblies were used to create pseudo-unigenes. These pseudo-unigenes were mapped onto the 'Baltico' genome and gene structures were predicted by PASA v2.5.2 [ 78 ]. For de novo prediction, AUGUSTUS v3.3.3 [ 79 ], SNAP v2013-02-16 [ 80 ], and GlimmerHMM v3.0.4 [ 81 ] were used to predict coding regions. Gene model evidence from the above programs was combined with EvidenceModeler [ 78 ] to obtain the final non-redundant set of gene structures. Repeat content was identified using EDTA v2.1.0 [ 82 ], and NLRs were annotated with NLR-Annotator [ 83 ]. Analysis between haplotypes The haplotype genomes were aligned with minimap2 [ 84 ]; variations were identified using SYRI [ 85 ] and plotted with GenomeSyn [ 86 ]. The criteria for determining whether structural variations (SVs; indels and HDRs >40 bp) may be mediated by intact TE insertions are described in Supplementary Data Fig. S23A . Homologous regions and syntenic blocks between the two haplotypes of carnation were constructed through the alignment of CDS sequences using MCScanX [ 87 ]. Allelic genes were identified based on the following criteria: (i) paired regions must be located on homologous haplotypes within syntenic blocks; (ii) a gene and its best homologous gene on the other haplotype should be matched; and (iii) a minimum of one variation (SNP, insertion, or deletion) is required within the CDS sequence alignment. Genes meeting these criteria were considered alleles. When genes within syntenic blocks between the two haplotypes shared identical CDSs, they were designated as a 'single allele'. To perform the ASE analysis, blooming flowers, young leaves, and roots were sampled for RNA sequencing with three biological replicates. The raw RNA reads were trimmed and mapped onto the 'Baltico' genome with HISAT2, and only uniquely mapped reads were kept for analysis. Read counts were obtained with HTSeq [ 88 ] using the following parameters: '-f bam -r name -t gene -i ID -a 0 -s no -m union'. DESeq2 [ 89 ] was used to identify differentially expressed genes (alleles showing unbalanced expression) ( P < 0.05 and |log2FoldChange| > 1). In addition, allele expression was divided into two classes: biallelic expression, in which the expression of the alleles does not differ between the two haplotypes; and unbalanced expression, in which it does. Alleles in the unbalanced expression category were further divided into three classes following classification methods reported previously [ 61 ]: monoallelic expression with Hap1; monoallelic expression with Hap2; and increased expression of one allele. Among these alleles, the partitioning criteria were set as follows.
If the count was less than one in one haplotype and greater than one in the other haplotype, the allele was considered to show monoallelic expression (Hap1 or Hap2). The remaining unbalanced expression alleles were considered as showing increased expression of one allele. The number of fragments per kilobase of exon model per million mapped fragments (FPKM) was calculated using StringTie v2.1.6 [ 90 ] (parameter -e). GO enrichment was visualized using the hiplot online site ( https://hiplot.cn ). Comparison of expression levels between genes with different features Expression levels were obtained from flower, leaf, and root tissues. Intact TEs annotated by EDTA were used to analyze the correlation between TE insertion and gene expression. Genes with intact TE insertions in the 5-kb flanking regions, 2-kb flanking regions, gene regions, exon regions, or intron regions were considered candidate genes affected by TEs (details can be found in Supplementary Data Fig. S23B ). For each feature (CDS length, intron length, and total gene length), genes longer or shorter than the median value were placed in the longer or shorter group, respectively. Genes with an average FPKM of 0 across flower, leaf, and root were classified as non-expressed. The t -test and ANOVA were used to test for significant differences, and a P -value of <0.05 was considered to indicate a significant difference between the counterparts. Genome decomposition analysis GDA v1.0 [ 91 ] was used to perform the genome decomposition analysis. Sequence features were extracted in 10-kb windows using default parameters. We added features comprising the RNA mapping depth, the repeat content identified by EDTA, and the genome annotation results, and set the telomeric sequence to ' Arabidopsis_thaliana '. A total of 27 features were used for dimensionality reduction and clustering with the Python UMAP [ 92 ] and hdbscan [ 93 ] libraries. For the Kolmogorov–Smirnov test, a P -value of <1e−20 was taken to indicate a significant difference.
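Because the decomposition step relies on the Python UMAP and hdbscan libraries named above, a minimal version of the dimensionality-reduction-plus-clustering step might look like the following; the window feature table and the parameter values are illustrative assumptions, not those used internally by GDA.

```python
# Minimal sketch of the UMAP + HDBSCAN step used for genome decomposition;
# the window feature table and parameter values are illustrative assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
import umap
import hdbscan

# Hypothetical table: one row per 10-kb window, 27 numeric feature columns plus a window id.
features = pd.read_csv("window_features.csv", index_col="window_id")

X = StandardScaler().fit_transform(features.values)

embedding = umap.UMAP(n_neighbors=30, min_dist=0.1, random_state=42).fit_transform(X)
labels = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(embedding)   # -1 = unclustered

clusters = pd.Series(labels, index=features.index, name="cluster")
print(clusters.value_counts().sort_index())
```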
Results The telomere-to-telomere genome assembly and annotation of D. caryophyllus ‘Baltico’ Total sequencing data with 73.45 Gb of high-fidelity (HiFi) data, 57.77 Gb of ultra-long (UL) ONT data, 44.64 Gb of Hi-C data, and 32.3 Gb of short-read data were used for genome assembly. The HiFi data used for genome assembly had an average length of 17 622.2 bp and an average base quality score of 31.1 ( Supplementary Data Fig. S1A and B ). The UL ONT data used for genome assembly had an average length of 38 018.9 bp and an average base quality score of 11.06 ( Supplementary Data Fig. S1C and D ). By filtering read lengths shorter than 100 kb, we got a total of 4.04 Gb data with an average length of 124 546.5 bp and average base quality score of 11.04 ( Supplementary Data Fig. S1E and F ). Through uniting Graphical Fragment Assembly (GFA) generated by hifiasm, we obtained a high continuous genome graph ( Supplementary Data Fig. S2 ). The primary assembly of ‘Baltico’ revealed the size to be 563 683 187 and 567 007 489 bp for two different haplotypes, hereafter identified as Hap1 and Hap2, respectively, and each assembly contained a total of 29 contigs ( Table 1 ). To evaluate the genome quality precisely, the two different Benchmarking Universal Single-Copy Orthologs (BUSCO) databases, eudicots_odb10 (EU) and embryophyta_odb10 (EM), were used. The primary assembly was of high quality, as revealed by BUSCO evaluation results, for both haplotypes. Complete scores were 97.9% (EM) and 93.8% (EU) for Hap1, 97.2% (EM) and 93.2% (EU) for Hap2 ( Supplementary Data Fig. S3A ). In addition, low duplication scores of 4.6% (EM) and 5.8% (EU) for Hap1 and 4% (EM) and 5.5% (EU) for Hap2 were revealed. The k -mer spectrum plot also reflected the high-quality and haplotype-resolved assembly results ( Supplementary Data Fig. S4 ). Using the Hi-C data, we successfully oriented the contigs into 15 pseudochromosomes, leaving a total of 28 gaps, of which there were 14 in Hap1 and 14 in Hap2 ( Supplementary Data Fig. S5 ). Furthermore, we detected that four and six telomeres were missing in the Hap1 and Hap2 assemblies, respectively ( Table 1 ). After closing these gaps by the UL ONT data, we compiled two gap-free ‘Baltico’ haplotypes. For Hap1, the genome size increased from 563 683 187 to 564 479 117 bp and for Hap2 the genome size increased from 564 651 758 to 568 266 215 bp ( Table 1 , Supplementary Data Table S1 ). The N50 size increased from 32 814 548 to 37 578 261 bp for Hap1 and from 33 947 117 to 38 006 131 bp for Hap2. Our assembly results were close to the genome survey result, which exhibited a haplotype genome size of 570 416 832 bp ( Supplementary Data Fig. S6 ). Several chromosomes in different haplotypes were highly divergent in length, such as chromosome 8 (Chr8) and Chr14 ( Supplementary Data Table S2 ). The BUSCO evaluation results were improved after the gap-filling process ( Table 1 ). For Hap1, the BUSCO complete value increased from 97.9 to 98.0% (EM) and from 93.8 to 93.9% (EU). For Hap2, the BUSCO complete value increased from 97.2 to 97.4% (EM) and from 93.2 to 93.7% (EU) ( Supplementary Data Fig. S3A ). The Hi-C heat map showed that errors were absent in the assembly ( Fig. 1E and F ). The k -mer spectral plot revealed that the gap-closing methods had little influence on the haplotype information and the main unique contents were present in the assembly ( Fig. 1G and H ). 
The switch error rate of our assembled genome was estimated to be 1.63%, and the consensus quality values (QV) estimated from the short-read data were 44.916 and 49.470 for Hap1 and Hap2, respectively. In addition, the mapping depth of the HiFi data showed little bias ( Supplementary Data Fig. S7 ). The published carnation genetic map 72 L [ 39 ] also showed strong collinearity with the assembly; the average Pearson correlation coefficient for the two haplotype genomes was 90.8% ( Supplementary Data Figs S8 and S9 ). Based on these results, we have compiled two high-quality, gap-free haplotypes of 'Baltico'. A total of 41 669 and 40 486 genes were predicted in the two assembled gap-free genomes of Hap1 and Hap2, respectively, and the BUSCO evaluation showed high complete scores using both EU and EM ( Table 1 , Supplementary Data Fig. S3B ). The ratio of the number of monoexonic genes to the number of multiexonic genes was 0.28 and 0.27 in Hap1 and Hap2, respectively ( Supplementary Data Table S3 ). Among these genes, 36 253 (87.00%) and 35 117 (86.74%) could be annotated by at least one database; moreover, 69.36 and 67.88% of the total genes could be annotated by the Pfam database in Hap1 and Hap2, respectively ( Supplementary Data Table S4 ), indicating reasonable and high-quality gene prediction results [ 40 ]. Comparative analysis and improvements to the 'Scarlet Queen' genome We performed comparative genomics analyses between the published ONT-based SQ genome and the gap-free genomes assembled in this study. There were 45 gaps and 17 unplaced contigs remaining in the SQ genome ( Table 1 ). The larger genome size of SQ may be due to the non-haplotype-aware assembly method, resulting in more redundant sequences. This was reflected in the BUSCO evaluation as higher duplication values in both the genome assembly and annotation results ( Table 1 , Supplementary Data Fig. S3A and B ). Although more genes were predicted in the SQ genome than in the gap-free genomes, the BUSCO evaluation revealed that the SQ annotation had a lower quality score ( Table 1 , Supplementary Data Fig. S3B ). Furthermore, the ratio of monoexonic to multiexonic gene numbers in SQ was 0.51, which is greater than the typical value of 0.2 ( Supplementary Data Table S3 ). Compared with 'Baltico', the SQ genome had shorter average gene and CDS lengths but a longer average exon length; in addition, SQ contained a greater average number of exons per multiexonic gene ( Supplementary Data Table S3 ). We also found that SQ had a higher proportion of shorter genes and shorter CDSs ( Supplementary Data Fig. S10 ). We checked the positions of the centromeres and telomeres in the two gap-free genomes and the SQ genome. A telomeric repeat region was found at both ends of each chromosome in the two gap-free 'Baltico' haplotypes ( Fig. 2A , Supplementary Data Table S5 ), while the SQ genome lacked nine telomeres ( Supplementary Data Table S5 ). Candidate centromere regions were identified by detecting higher-order repeat (HOR) regions, and we detected four candidate centromere regions in the two haplotypes of 'Baltico', of which two were found in Chr10 and two in Chr13 ( Fig. 2B , Supplementary Data Table S6 ). The candidate centromeres range in size from 995 471 to 2 646 945 bp, and the repeat monomers are 510 and 32 bp for Chr10 and Chr13, respectively ( Fig. 2A and B , Supplementary Data Table S6 ).
We also applied a read-based approach to identify candidate centromere regions and detected only three candidate regions ( Supplementary Data Fig. S11 ). These results point to unusual features of the centromere regions in D. caryophyllus . Furthermore, centromere regions that could not be identified by HORs were present not only in the gap-free 'Baltico' genome but also in the SQ genome. Comparing the two gap-free haplotypes with the SQ genome, we found strong collinearity between them: the percentages of syntenic regions were 80.78 and 78.40% for Hap1 and Hap2, respectively, when compared with SQ ( Supplementary Data Table S7 ). Several chromosomes, such as Chr10, exhibited extremely strong collinearity between SQ and both gap-free haplotypes. However, in Chr3 the percentage of collinear regions between SQ and Hap1 was 99.79%, while it was only 67.22% between SQ and Hap2. In Chr2, the percentage between SQ and Hap1 was 62.58%, whereas up to 99.91% of the regions were collinear between SQ and Hap2. There were 1984 and 1989 structural variations between SQ and Hap1 and Hap2, respectively ( Supplementary Data Table S7 , Supplementary Data Fig. S12 ). These findings illustrate the considerable diversity between the different cultivars. We also explored the nucleotide-binding-site leucine-rich-repeat (NLR) receptors in the 'Baltico' and SQ genomes. In total, there were 381 NLRs in SQ, and 331 and 366 NLRs in 'Baltico' Hap1 and Hap2, respectively ( Supplementary Data Table S8 ). We suspect that the lower number of NLR genes in 'Baltico' could be caused by the haplotype-aware assembly or by individual differences [ 41 ]. Among the six canonical classes of NLRs, we found that CC-NBARC-LRR occupied the largest proportion, from 65.88 to 70.09%, followed by NBARC-LRR, from 15.41 to 18.90%, in these three genomes. We also explored the distribution patterns of the six canonical classes of NLRs in the genomes. The distribution pattern was comparable between the haplotypes, but small differences could be detected. For example, at around 4 Mb of Hap1Chr2 and Hap2Chr1, and at around 2.5 Mb of Hap1Chr3 and Hap2Chr3, the NLR classes differed between these regions ( Supplementary Data Fig. S13 ). Correlation between genome architectures and gene expression Previous studies have shown that exon, gene, and intron lengths, as well as TE insertions, can affect gene expression levels [ 20 , 42 ]. We therefore investigated whether these factors could influence expression levels in D. caryophyllus based on our gap-free and well-annotated 'Baltico' genomes. In terms of the ratio of expressed to unexpressed genes, we found that genes with longer CDS, intron, and gene lengths tended to be expressed in the different tissues ( Fig. 3A , Supplementary Data Fig. S14A ), and longer genes (whether measured by gene, CDS, or intron length) exhibited significantly higher expression ratios than shorter genes. Genes with different exon numbers also exhibited different expression ratios, and we found a trend of increasing expression ratio with increasing exon number in the different tissues ( Fig. 3B , Supplementary Data Fig. S14B ). Furthermore, we found that genes with two exons exhibited the lowest expression ratio of all the exon-number classes. As there were different expression ratios among genes with different exon numbers, we further checked the likely major functions of genes with specific exon numbers by performing KEGG enrichment analysis.
The KEGG enrichment analysis showed that the main classes of ‘BRITE hierarchies’ and ‘metabolism’ were enriched in all groups with different exon numbers. ‘Organismal systems’ was shared among the genes containing 7, 8 and >10 exons; ‘environmental information processing’ was shared between genes containing 1, 2, 3 and >10 exons; ‘genetic information processing’ was present in all groups except for the genes containing seven exons ( Supplementary Data Table S9 ), indicating the preference of gene function in genes with different exon numbers. We identified a total of 70 563 and 76 690 intact TEs in Hap1 and Hap2, respectively ( Supplementary Data Table S10 ). It seems that in D. caryophyllus TEs were more likely to have inserted into a region flanking the gene and these insertions tended to be in the upstream region. Of those TEs that did insert into gene loci, most inserted in introns ( Fig. 3C ). We found that the TE insertions also correlated with gene expression ratios. Genes correlated with TE insertions had significantly lower expression ratios compared with genes uncorrelated to TE insertion. While TEs inserted into the upstream 5 kb and intron region in Hap2, there were no significant differences compared with genes uncorrelated with TE insertions. Genes with TE insertions located in exons had the lowest expression ratios in both haplotypes in all tissues ( Supplementary Data Fig. S14C ), and exhibited significant differences when compared with all other insertion types or non-insertion types ( Fig. 3C ). These results demonstrated that the lengths of CDS, intron, and gene, and the exon numbers of gene and specific TE insertions correlated with the expression ratio. We further checked the expression levels of the expressed genes whose expression ratio may be affected by different genome architectures. In different tissues, we found that Hap1 in leaf and both Hap1 and Hap2 in root exhibited no significant differences in expression level when compared with different lengths of CDS, while other tissues or haplotypes all showed significant differences in expression levels when comparing longer and shorter genes ( Fig. 3D ). The general patterns showed that longer genes, CDSs, and introns tend to have higher expression levels than shorter genes. There was a clear pattern of genes with more exons being expressed at higher levels. This was especially the case for genes with exon numbers greater than five compared with genes with exon numbers less than two, which showed significantly higher expression levels in all tissues and haplotypes ( Fig. 3E ). TEs can also play an important role in the direct or indirect regulation of gene expression. Significantly lower expression levels were observed in both haplotypes in different tissues when there were TE insertions in the exon regions compared with genes without TE insertions and total genes ( Fig. 3F ). In different tissues of Hap1, we found that the expression levels of genes with TE insertions were significantly lower than the expression levels of genes devoid of TE insertions and the total. In Hap2, we found that the expression levels of genes with TE insertions were significantly lower than those of genes devoid of TE insertions and the total; however, there were differences for the root, where the TE insertions in the upstream 5-kb regions and upstream 2-kb regions were not significantly different compared with the total. 
We suspect that the haplotypes might be affected differently by the TEs, and this further demonstrates the divergence between the haplotypes. As TE insertion correlated to lower expression ratio and level, we were curious about whether TE insertion was correlated to functional preference. Through KEGG enrichment analysis, we could significantly enrich several KEGG terms in the genes with TE insertions in the gene and intron region ( Supplementary Data Table S11 ). These KEGG terms mainly correlated with metabolism processes. We also found that the term ‘00940 phenylpropanoid biosynthesis’ for the conversion of anthocyanidins to anthocyanins was enriched [ 43 ]. We found that only three KEGG terms were significantly enriched in the TE non-insertion gene set ( Supplementary Data Table S11 ); for example, the term ‘00194 photosynthesis proteins’ was enriched, which may suggest that photosynthesis is not suppressed. The GO annotation results also suggested that TE insertions may have function preference. The genes annotated with terms of ‘catalytic activity’ and ‘binding’ from the main class of ‘molecular function’ (MF), and ‘metabolic process’, ‘cellular process’, and ‘response to stimulus’ from the main class of ‘biological process’ (BP) have significantly different percentages among different TE insertion situations ( Supplementary Data Fig. S15 ). For genes annotated with the GO term ‘catalytic activity’, TEs tend to insert more into gene and intron regions and less into exon and downstream 2-kb regions. For genes annotated with the GO term ‘binding’, TEs tended to insert into exon regions; for genes annotated with the GO terms ‘metabolic process’ and ‘cellular process’, TEs were less likely to insert into exon regions. For genes annotated with the GO term ‘response to stimulus’, TEs were less likely to insert into the gene and intron region and more likely to insert into the exon region. Genome decomposition analysis The gap-free genome provided a great opportunity to study the genome architectures. Through genome decomposition analysis (GDA), we divided the T2T haplotypes into seven clusters based on the specific characteristic of sequences under the non-overlapping window size of 10 kb. The proportion of different clusters in the genome was 0.35, 0.01, 0.27, 0.42, 0.11, 22.75, and 76.00% from clusters −1 to 5 ( Fig. 3G ), respectively. A total of 27 features were used to perform cluster analysis and 19 features were used to describe the characteristics of regions that could not be clustered (−1); 16, 18, 21, 11, 20 and 21 features were used to describe the characteristic of clusters from 0 to 5 ( Supplementary Data Table S12 ). Clusters 1 and 2 were classified by the high ratio of tandem repeat regions when compared with other clusters. The main difference between cluster 1 and cluster 2 was that cluster 2 contained more Gypsy and other TEs. We could only detect the centromere candidate region in cluster 2 in Chr10 and cluster 1 in Chr13 with the presence of continuous long blocks. Cluster 0 shared several characteristics with cluster 1, but has the lower AT skew; thus there were proportionately fewer in the whole genome. Cluster 3 was uniquely characterized by its telomere sequences, mainly existing in the head and the end of each chromosome, which could be identified as telomere regions. Clusters 4 and 5 account for most of the genome (98.75%), indicating that the two clusters represent the main structural characteristics. 
Cluster 4 had a higher CpG island percentage, fewer complex repeats and inverted repeats, but more repeat-rich regions including retrotransposon proteins, putative retrotransposons and TEs. Cluster 5 contained the fewest TEs and the highest numbers of genes and exons, the longest gene length and highest RNA sequence coverage ( Supplementary Data Table S12 ). In terms of distribution on the chromosomes, cluster 5 has more continuous long block regions, but cluster 4 tends to insert into the long blocks of cluster 5 regions ( Fig. 3H , Supplementary Data Fig. S16 ). The telomeres of Hap1Chr9, Hap2Chr9, and Hap2Chr7 were in cluster 2. We speculate that the shorter telomere repeat lengths and other significant features may contribute to the cluster results ( Fig. 3H , Supplementary Data Fig. S16 , Supplementary Data Tables S5 and S12 ). For clusters 4 and 5, irrespective of the different haplotypes or chromosomes, the ratio was stable ( Supplementary Data Fig. S17 ). Other clusters exhibited different proportions among the different haplotypes. For example, Hap2 contained a greater cluster 1 and 3 content than Hap1, particularly in Chr13. Our GDA gave a more visual correlation between the TE contents and genes, the TE-rich regions were very fragmented and inserted into gene regions. Comparative analysis between gap-free haplotypes We identified the syntenic regions and structure variations between the two gap-free haplotypes. The percentage of syntenic regions between Hap1 and Hap2 in the different chromosomes ranged from 55.88 to 99.99% and 57.13 to 99.93%, respectively ( Supplementary Data Table S13 ). A total of 54 inversions and 973 translocations were identified ( Fig. 2A , Supplementary Data Table S13 ). The most divergent chromosomes were Chr2, Chr3, Chr4, Chr8, and Chr11, these five chromosomes had high percentages of inversions (44) and translocations (79) with respect to total variations. We identified a total of 584 486 SNPs and 115 701 indels (57 508 insertions and 58 193 deletions) between Hap1 and Hap2. Among them, 88 689 of these SNPs and indels were distributed in the exon regions, and 34 608 of these SNPs and indels caused missense mutations. Furthermore, we also investigated whether these indels and highly divergent regions (HDRs) were mediated by intact TEs ( Supplementary Data Table S14 ). There were a total of 30 878 indels and HDRs whose length was >40 bp, and we found that 7311 structural variations may be mediated by TE insertions, accounting for 23.67% of the total number. Subsequently, we identified 10 256 alleles that contain at least one SNP variation (20 512 genes, accounting for 24.97% of all annotated genes) between the two haplotypes, alongside 16 036 ‘single alleles’ with identical CDSs between these haplotypes. The CDS similarity of most alleles ranged from 95 to 99% ( Fig. 4A ). Notably, the similarity of genes between the two haplotypes was comparable to that observed between the two cultivars ( Supplementary Data Fig. S18 ). To gain insights into ASE of 10 256 alleles, we conducted an analysis using transcriptome data from the blooming flowers, roots, and young leaves of ‘Baltico’. In the three tissues, we found that about 31.55, 33.94, and 29.28% of the expressed genes showed ASE in flowers, roots, and leaves, respectively. Among them, 2907, 2779, and 2487 alleles showed unbalanced expression in roots, flowers, and leaves, respectively ( Supplementary Data Table S15 ). 
We found that there were more biallelic expression genes in leaves (70.72%) than in flowers (68.45%) and roots (66.06%), and a greater frequency of increased expression of one allele in roots (30.76%) than in other tissues (28.42 and 26.58% respectively in flowers and leaves) ( Fig. 4B , Supplementary Data Table S15 ). Among the total of 4284 expressed alleles in each of the three tissues, 1379 exhibited significantly different expression ( Fig. 4C ). There were 799, 398, and 597 alleles showing unbalanced expression in roots, leaves, and flowers, respectively. The K a / K s values of monoallelic expression were higher than for alleles of biallelic expression in flower ( Fig. 4D ), leaf, and root ( Supplementary Data Fig. S19 ). The biallelic expression alleles had significantly lower K a /K s values than any other expression type in the three tissues, indicating that most of the biallelic expression alleles were evolutionarily conserved. We noticed that the increased allele expression in three tissues of both haplotypes showed no bias ( Fig. 4E, Supplementary Data Fig. S20 ). GO enrichment analysis revealed that the biallelic expression alleles of flowers were primarily enriched in terms related to ‘RNA binding’, ‘structural molecule activity’, and ‘structural constituent of ribosome’ ( Fig. 4F ). For the alleles showing unbalanced expression alleles in flowers, the terms ‘catalytic activity’, ‘transporter activity’, and ‘transmembrane transporter activity’ were significantly enriched ( Fig. 4G ). In addition, as in flowers, the alleles showing biallelic expression and the alleles showing unbalanced expression in roots and leaves were enriched in similar GO terms, which suggests that there was no significant tissue specificity of differentially expressed alleles ( Supplementary Data Fig. S21 ). To check whether ASE might correlate with TE insertions, we further detected 1098, 964, and 1151 ASEs with specific TE insertions in flower, leaf, and root, accounting for 38–39% of the total ASE numbers ( Table 2 , Supplementary Data Tables S16 and S17 ). The ASEs in different tissues between the haplotypes may play important roles in the formation of carnation traits. For example, one ASE annotated with ‘UDP-D-xylose’ was reported to be involved in the biosynthesis of a branched-chain sugar [ 44 ] and contained a specific DNA/DTH insertion in the upstream 5-kb region in Hap1 ( Supplementary Data Fig. S22A ), the expression of which was significantly lower in both haplotypes of the three tissues. Consistent with the general pattern that TE insertion correlated with lower expression level, alleles with specific TE insertion showed significantly lower expression than alleles without TE insertion.
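To make the allele-expression classification used throughout these results explicit, the sketch below applies the criteria described in the Methods (DESeq2 significance for unbalanced expression, and the count-based rule for monoallelic expression) to a hypothetical per-allele table; the column names, file name, and use of the raw DESeq2 P-value are assumptions.

```python
# Sketch of the allele-expression classification described in the Methods:
# biallelic vs unbalanced, with unbalanced alleles split into monoallelic (Hap1/Hap2)
# or increased expression of one allele. The input table layout is assumed.
import pandas as pd

alleles = pd.read_csv("allele_expression.csv")
# assumed columns: allele_id, count_hap1, count_hap2, pvalue (DESeq2), log2fc (Hap1 vs Hap2)

def classify(row):
    unbalanced = (row["pvalue"] < 0.05) and (abs(row["log2fc"]) > 1)
    if not unbalanced:
        return "biallelic"
    if row["count_hap2"] < 1 and row["count_hap1"] > 1:
        return "monoallelic_Hap1"
    if row["count_hap1"] < 1 and row["count_hap2"] > 1:
        return "monoallelic_Hap2"
    return "increased_one_allele"

alleles["class"] = alleles.apply(classify, axis=1)
print(alleles["class"].value_counts())
```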
Discussion The telomere-to-telomere ‘Baltico’ genomes provide new insights into the genome structure of D. caryophyllus The previous lack of an accurate and gap-free genome presented a significant barrier to tracking and understanding repeat structure, function, and variation in large complex repeats [ 9 ] such as those found in the centromere and telomere regions. In this study, we assembled and annotated an accurate, continuous, and complete gap-free D. caryophyllus genome based on high-depth long-read sequencing data and state-of-the-art assembly methods. This finished genome provided an opportunity to analyze the genome-scale repeat content and to characterize the genome architecture. The centromeric region is important for faithful chromosomal segregation in mitosis and meiosis, and deletion of the centromere or mutation of critical kinetochore proteins results in chromosome loss [ 45 , 46 ]. Generally, centromeres in most higher eukaryotic organisms are composed of long arrays of satellite DNA [ 47 , 48 ], which can be identified by the abundance of a repeat monomer (often >10 000 copies per chromosome) [ 49 ]. The centromeric region can range in size from ~500 kb to several megabases [ 50 , 51 ], and the repeat monomers are typically ~180 bp long, although a broader range of monomer lengths is found in animals [ 49 ]. In some plants and animals, a single chromosome, or even the entire chromosome complement, lacks HOR arrays [ 46 , 52–54 ]. For example, in potato there are five centromeres in which HORs could not be identified; six different repeat monomers were identified, and four of the centromeric repeats were amplified from retrotransposon-related sequences [ 54 ], illustrating the great genetic diversity of centromeres among different species. In the three carnation genomes (two haplotypes of ‘Baltico’ and SQ), we detected only four candidate centromere regions by identifying the HORs in ‘Baltico’. Bioinformatic analysis of our gap-free genome led to the conclusion that carnation centromeres have specific characteristics that cannot be identified by HORs alone. The telomere regions cap the ends of eukaryotic chromosomes to protect them from deterioration and prevent a DNA damage response [ 55 ], and consist of tandem repeats [ 56 ]. The difference in telomere length among plants is correlated with certain phenotypes [ 57 , 58 ]. For example, telomere length variation may be associated with flowering time [ 59 ]. Our gap-free genome provides a valuable resource for analyzing the correlation between flowering time and telomere region length in the Caryophyllales. Correlation between gene expression, gene structure, and transposable element insertion An increasing number of studies focus on ASE, including the types of ASE, its causes, and the regulatory mechanisms through which it contributes to the formation of important traits [ 60 , 61 ]. ASE has been reported to affect individual traits such as color [ 20 ] and resistance [ 22 , 24 ]. We found a large amount of ASE (29.28–33.94%) in flower, leaf, and root. These ASEs were divided into four different types and enriched in different GO terms, suggesting that different classes of ASEs may be involved in different regulatory pathways. Exon number and gene and intron lengths have been reported to affect gene expression levels, with genes that have longer introns, more exons, or TE insertions being more likely to exhibit higher expression levels than shorter genes or genes without TE insertions [ 42 ]. 
Researchers have also found that high levels of expression tend to be associated with shorter mRNA lengths [ 62 ]. However, in our case, based on the gap-free and well-annotated genomes, we found that genes with longer CDSs, introns, and overall gene lengths were more likely to be expressed, and at higher levels, than shorter ones ( Fig. 3A and D ). Previous studies demonstrated that shorter genes correlate with responses to stimuli [ 63 ], whereas longer genes are often associated with important biological processes [ 64 , 65 ]. Our results reveal that the gene expression ratios and expression levels are correlated with TE insertions. Genes without a TE insertion have higher gene expression ratios and levels than genes with TE insertions. It seems that TEs act mainly as repressive elements in D. caryophyllus ( Fig. 4C and F ). In particular, TEs inserted into exon regions significantly correlate with downregulation or gene silencing. This TE-associated downregulation may be achieved by the specific insertion disrupting the gene's normal structure [ 66 ]. In our case, we found that specific DNA/DTH insertions in gene-flanking regions correlated with significantly lower expression ( Supplementary Data Fig. S22B ), indicating that allelic imbalance could be caused by specific TE insertions.
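The gene-length versus expression relationship described here is the kind of association that can be checked with a rank correlation. The following Python sketch uses Spearman's correlation on synthetic placeholder arrays; the variable names and data are hypothetical, not values from this study.

```python
# Sketch of testing the association between a gene structural feature (here,
# CDS length) and expression level with a Spearman rank correlation.
# All data are synthetic placeholders generated for illustration.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
cds_length = rng.integers(300, 6000, size=500)                   # bp, synthetic
tpm = np.maximum(0, 0.002 * cds_length + rng.normal(0, 3, 500))  # synthetic expression

rho, p = spearmanr(cds_length, np.log1p(tpm))
print(f"Spearman rho = {rho:.2f}, p = {p:.2g}")
```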
Contributed equally to this work. Abstract Carnation ( Dianthus caryophyllus ) is one of the most valuable commercial flowers, due to its richness of color and form, and its excellent storage and vase life. The diverse demands of the market require faster breeding in carnations. A full understanding of carnations is therefore required to guide the direction of breeding. Hence, we assembled the haplotype-resolved gap-free carnation genome of the variety ‘Baltico’, which is the most common white standard variety worldwide. Based on high-depth HiFi, ultra-long nanopore, and Hi-C sequencing data, we assembled the telomere-to-telomere (T2T) genome to be 564 479 117 and 568 266 215 bp for the two haplotypes Hap1 and Hap2, respectively. This T2T genome showed substantial improvements in assembly and annotation over the previous version, as confirmed by several different evaluation approaches. Our T2T genome enables, for the first time, analysis of the telomere and centromere regions, allowing us to speculate about specific centromere characteristics that cannot be identified by high-order repeats in carnations. We analyzed allele-specific expression in three tissues and the relationship between genome architecture and gene expression in the haplotypes. This demonstrated that gene, coding sequence, and intron lengths, exon numbers, and transposable element insertions correlate with gene expression ratios and levels. The insertions of transposable elements repress expression in gene regulatory networks in carnation. This gap-free finished T2T carnation genome provides a valuable resource to illustrate the genome characteristics and for functional genomics analysis in further studies and molecular breeding.
Supplementary Material
Acknowledgements This work was funded by the National Natural Science Foundation of China (32002074); the Shenzhen Fundamental Research Program (JCYJ20220818103212025); Major Scientific Research Tasks, Kunpeng Institute of Modern Agriculture at Foshan (KIMA-ZDKY2022004); the Scientific Research Foundation for the Principal Investigator, Kunpeng Institute of Modern Agriculture at Foshan (KIMA-QD2022004); and the Chinese Academy of Agricultural Sciences Elite Youth Program (110243160001007) to Z.W. This work was also supported by the Innovation Program of Chinese Academy of Agricultural Sciences, Science Technology and Innovation Commission of Shenzhen Municipality of China (ZDSYS20200811142605017). Author Contributions X.Z., Z.W., and W.R. designed the whole research. L.Lan and X.Z. performed the T2T genome assembly and genome annotation. L.Leng, W.L., Y.R., and X.F. guided the ASE analysis and the correlation between genome architecture and gene expression level. L.Lan, L.Leng, and W.L. performed the GDA. L.Lan, L.Leng, W.L., and X.Z. wrote the first manuscript. L.Lan, L.Leng, W.L., Y.R., W.R., X.F., Z.W., and X.Z. edited and approved the final manuscript. Data availability statement The genome assembly sequences, gene annotations and transcriptome data are publicly available in the China National GeneBank ( https://www.cngb.org/ ) under project number CNP0004461. Conflict of interest The authors declare that they have no competing interests. Supplementary information Supplementary data is available at Horticulture Research online.
CC BY
no
2024-01-16 23:43:50
Hortic Res. 2023 Nov 27; 11(1):uhad244
oa_package/2f/d6/PMC10788775.tar.gz
PMC10788776
38226141
Introduction Intervertebral disc herniation (IVDH) is one of the major causes of back pain and disability, with high morbidity. 1,2 The human spine consists of five sections – the cervical, thoracic, and lumbar spine, the sacrum, and the coccyx. 3 The cervical, thoracic and lumbar sections are mobile due to the presence of intervertebral discs, which are joints between adjacent vertebrae. An intervertebral disc consists of the nucleus pulposus and the annulus fibrosus. The nucleus pulposus (NP) is the gelatinous core of the disc and is surrounded by the thick, fibrous annulus fibrosus (AF). 3 IVDH is caused by degeneration of the NP, leading to loss of integrity, fragmentation and subsequent herniation of NP material through the disrupted AF into the spinal canal. Although the exact mechanism of NP degeneration is largely unknown, several factors, such as repetitive trauma with high mechanical load (wear and tear) and aging, are known to contribute to the pathophysiology of intervertebral disc degeneration. 4,5 The loss of integrity may be influenced or accelerated by disorganization of some extracellular matrix (ECM) proteins, such as the collagen fibers. 6,7 It is also known that herniated intervertebral discs may become calcified, especially in the thoracic spine and rarely in other segments, making the treatment of thoracic herniation more difficult. 8 The symptoms of disc herniation depend on the location and are mainly attributed to the compression of the spinal cord and/or nerve roots. Typically, they include pain at the corresponding level of the spine (neck pain, back pain or lower back pain) radiating to the upper or lower extremities. Numbness and weakness in the arms and legs can also develop if neural structures are compressed to a significant degree. The diagnosis of IVDH is mainly based on medical imaging modalities, which are relatively expensive. 9 Magnetic resonance imaging (MRI) is the most effective of these because of its good soft-tissue visualization capacity and because patients are not exposed to radiation. 9,10 Computed tomography (CT) (contrast-enhanced CT, non-contrast CT, or multi-detector CT) is used for detecting intervertebral disc herniation, since it can give information about the size and the shape of the herniated disc. 9,11 Myelography is an older but useful method for the diagnosis of root compression. 12 X-ray imaging is also used for cervical or lumbar disc herniation, since it is a cheap and easily accessible method. 9 When CT, myelography or X-ray imaging is used for the diagnosis of cervical or lumbar disc herniation, patients are exposed to radiation. 9,11–13 Scanning acoustic microscopy (SAM) provides information, with micrometer resolution, about the morphology and mechanical properties of biological tissues simultaneously. Focused high-frequency ultrasound is used in SAM, with the major advantages of acquiring two-dimensional images rapidly and scanning the specimen immediately without special preparation or staining. As a result, either the speed of sound (SOS) in specimen tissues 14–18 or the acoustic impedance 19,20 of samples can be calculated and mapped in two dimensions. Scanning electron microscopy (SEM) is another imaging tool that can characterize samples such as tissue sections, cells or nanoparticles by obtaining information about their morphology, structure and composition. 21
Images are obtained by detecting a variety of signals, such as secondary electrons and backscattered electrons, which are frequently used for imaging biological samples, alongside X-rays and cathodoluminescence. 22 Energy-dispersive spectroscopy (EDS) is used for the semi-quantitative and qualitative analysis of the chemical elements in samples by analyzing two types of radiation: continuous radiation, which forms the background of the measurement, and characteristic radiation of specific wavelengths, which allows the elemental composition to be determined. 23–25 We aimed to evaluate intervertebral disc herniation by examining AF and NP tissues of herniated human cervical and lumbar discs. We characterized the AF and NP tissues by using scanning acoustic microscopy (SAM), scanning electron microscopy (SEM) and energy-dispersive spectroscopy (EDS). SAM provided information on the structural and mechanical properties of samples with micrometer resolution, while SEM provided morphological information about the herniated tissues with higher resolution. EDS provided chemical information about the herniated AF and NP tissues by element composition analysis. Consequently, we expect that combining these techniques in the clinic will help surgeons to identify the altered AF and NP tissues of herniation patients with micrometer resolution, which will minimize the possibility of a new herniation forming from a segment not excised during surgery.
Materials and methods Ethics declaration This study was ethically approved by the Istanbul Biruni University Ethics Committee (Number: 2020/44-11), and informed consent was obtained from each participant. All experiments were performed in accordance with relevant guidelines and regulations. Specimens All patients were operated on under general anesthesia. The standard surgical technique for discectomy was utilized for obtaining the specimens. Briefly, for the lumbar spine, access was established through a midline incision, paravertebral muscle dissection from the posterior aspect of the spine and facetectomy at the index level. Once the exiting and traversing nerve roots were identified, the disc joint was exposed in the Kambin triangle. For the cervical spine, a right-sided transverse incision was performed and the intervertebral disc was approached initially by dissection through the medial side of the sternocleidomastoid muscle and later between the carotid artery and esophagus. Once the disc was exposed and verified, the annulus fibrosus was cut with a scalpel and a rectangular specimen was obtained. The nucleus pulposus specimen was taken by using surgical curettes and pituitary rongeurs through the established window. Care was taken to preserve the mechanical integrity of the specimens. Electrocautery was not employed in order to avoid thermal damage to the tissues. Scanning acoustic microscopy The scanning acoustic microscope (AMS-50SI) used in the acoustic impedance experiments was developed by Honda Electronics (Toyohashi, Japan). The acoustic impedance (AI) mode chosen for the characterization is shown in Fig. 8 . An 80 MHz transducer was the ultrasonic signal generator and receiver in this study; it has a focal length of 1.5 mm with a spot size of 17 μm. The coupling medium between the quartz lens and the substrate is distilled water. The generated ultrasonic signals are scanned by the X – Y stage, and the reflected signals from both the reference and the target material are compared and analyzed to generate intensity and acoustic impedance maps of the region of interest with 300 × 300 sampling points and a lateral resolution of approximately 20 μm. 45 Scanning electron microscopy and energy dispersive spectroscopy AF and NP tissue samples excised from the patients were kneaded into pieces not exceeding 0.5 cm; the pieces underwent 10 acetone exchanges and were dried under carbon dioxide in a critical point dryer (Leica EM CPD300) to remove the water contained in them, with critical drying completed at 40 °C without damaging the tissues. This process was done to prevent damage to the tissues from sudden evaporation of water when exposed to high vacuum. The samples were placed on carbon tapes adhered to the aluminum mount for SEM imaging and EDS analysis. An approximately 70 nm carbon coating was applied with 3 flash pulses in the Leica EM ACE 200 device after the tissues were removed from the critical point dryer and before they were placed in the SEM. This process creates a conductive layer on the surface of the tissues, preventing the formation of an electron cloud (surface charging). This layer increases the number of electrons reaching the secondary electron detector and the number of X-rays reaching the EDS detector; therefore, clearer SEM images are obtained and more reliable and accurate EDS measurements are made. 
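For readers unfamiliar with AI-mode SAM, the sketch below shows one common way an acoustic impedance map can be derived from the reflections off the water reference and the tissue, assuming a substrate-reflection geometry. The formulation and the substrate and reference impedance values are textbook assumptions for illustration, not parameters reported in this paper.

```python
# Hedged sketch of converting reflected-signal amplitudes from a reference
# (water) and a target (tissue) into an acoustic impedance map, assuming a
# substrate-reflection AI-mode geometry. Impedance constants are assumed.
import numpy as np

Z_SUB = 2.37e6  # acoustic impedance of a polystyrene substrate [Pa*s/m], assumed
Z_REF = 1.50e6  # acoustic impedance of distilled water (reference), assumed

def acoustic_impedance_map(s_target, s_ref):
    """Convert reflected-signal amplitudes (2D arrays) into an impedance map."""
    # Recover the incident signal from the reference reflection:
    #   s_ref = s0 * (Z_REF - Z_SUB) / (Z_REF + Z_SUB)
    s0 = s_ref * (Z_REF + Z_SUB) / (Z_REF - Z_SUB)
    # Per-pixel reflection coefficient at the substrate-tissue interface
    r = s_target / s0
    # Invert R = (Z - Z_sub) / (Z + Z_sub)
    return Z_SUB * (1 + r) / (1 - r)

# Example with a synthetic 300 x 300 scan (the sampling grid size used above)
rng = np.random.default_rng(0)
s_ref = np.full((300, 300), -0.22)               # reference reflection amplitude
s_tgt = s_ref * rng.uniform(0.9, 1.3, (300, 300))
z_map = acoustic_impedance_map(s_tgt, s_ref)     # values near the soft-tissue range
print(z_map.mean() / 1e6, "MRayl")
```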
Another process for increasing the number of signals per second in EDS analysis is to increase the voltage applied to the filament, which increases the speed of the electrons. These accelerated primary electrons excite electrons in the second and third shells of the atoms within the tissues to higher energy levels, and the X-rays subsequently emitted have characteristic energies. To obtain 5000 or more counts per second from an insulating surface, the spot aperture of the beam was increased by applying 20 kV (spot size 5.5 for the Thermo Scientific™ Quattro ESEM). The visualization of the tissues was done under a high vacuum. Elemental analyses were performed with an EDAX Energy Dispersive Spectroscopy (EDS) detector. Before the analyses, the EDAX EDS device was calibrated automatically with Al and Cu standard samples using the APEX software, and EDS analysis was performed for each tissue sample over a suitable area at 1000 times magnification and a 10 mm working distance for 14 minutes. The spectra of carbon (C), nitrogen (N), oxygen (O), sodium (Na), sulfur (S), and calcium (Ca) were measured in percentage by weight. The purpose of performing area analysis instead of point or line analysis was to calculate the average elemental weights of different structures within the tissue. In this way, the results, obtained from a greater area, were more representative and accurate. Statistical analysis Statistical analysis of the acoustic impedance values of AF and NP tissues from both genders was performed using the GraphPad (Prism8) program. The graphs of the analyzed data were also generated with the GraphPad (Prism8) program. An unpaired, one-tailed Student's t -test ( t -test with unequal variance) was used to determine statistically significant differences in acoustic impedance measurements among genders, tissue types, and calcification-rich and less calcified areas in the samples, and the statistical significance level was set to p < 0.05.
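A minimal Python equivalent of the statistical comparison described above (an unpaired, one-tailed t-test with unequal variances, i.e. Welch's test) is sketched below; the impedance values are placeholders, not data from Table 1.

```python
# Sketch of the comparison described above: an unpaired, one-tailed t-test
# with unequal variances (Welch's test), here testing whether AF acoustic
# impedances exceed NP impedances. Numbers are hypothetical placeholders.
from scipy.stats import ttest_ind

af_values = [1.72, 1.81, 1.65, 1.90, 1.78]  # MRayl, hypothetical AF impedances
np_values = [1.58, 1.66, 1.71, 1.52, 1.60]  # MRayl, hypothetical NP impedances

t, p = ttest_ind(af_values, np_values, equal_var=False, alternative="greater")
print(f"t = {t:.2f}, one-tailed p = {p:.4f}, significant at 0.05: {p < 0.05}")
```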
Results Scanning acoustic microscopy results AF and NP tissues of 15 patients were sliced cross-sectionally for the SAM studies. Table 1 shows the average acoustic impedance values of all the tissues examined. Nine patients were female and six were male, of varying ages. Each acoustic impedance value in Table 1 is the average measured over the complete specimen. The acoustic impedance maps of the herniated AF and NP tissues were obtained with SAM in acoustic impedance (AI) mode. SAM images were constructed by collecting the reflections of ultrasound signals from both the reference and the front surfaces of the slices. Fig. 1 shows the acoustic impedance distribution of the nucleus pulposus tissue sample obtained from a male patient and is an example of the images obtained in AI mode. A MATLAB program was used to map the acoustic impedance values for each sample. Fig. 2 shows the acoustic impedance values of male and female annulus fibrosus (AF) and nucleus pulposus (NP) tissues individually with their uncertainty values. The acoustic impedance values are presented in a scatter graph generated by the GraphPad (Prism8) software. Scanning electron microscopy and energy dispersive spectroscopy results AF and NP tissue samples were placed inside the microscope for both SEM and EDS. Fig. 3 shows SEM images of 2 male NP tissues with low and high atomic% calcium. Fig. 4 shows SEM images of a female patient's AF and NP tissues from the C6-7 intervertebral disc (between the sixth and seventh cervical vertebrae). The elemental compositions of the tissues, which were also investigated with SEM as in Fig. 3 and 4 , are presented in Fig. 5 . Table 2 shows the atomic percentages of elements in the AF and NP tissues, with their uncertainty values, obtained by energy dispersive spectroscopy (EDS) analysis. The gender- and location-based elemental composition percentages presented in Table 2 , for both the AF and NP tissues, are shown in Fig. 6 . Statistical analysis results The statistical analyses of the acoustic impedance values were done with the GraphPad (Prism8) program for all the AF and NP tissues and for the tissues that were also examined with EDS and SEM imaging. The graphs generated are presented in Fig. 7 .
Discussion SAM is successful in monitoring the mechanical properties of AF and NP tissues by calculating their acoustic impedance values ( Table 1 ). As can be seen in Table 1 and Fig. 2 , in most of the patients the acoustic impedance values of AF tissues are greater than those of NP tissues, because AF is a fibrocartilaginous tissue composed of highly cross-linked collagen fibrils, whereas NP is more amorphous with a small percentage of randomly oriented fibrils. 26 Fig. 4 shows the difference in structure between AF and NP. However, in some of the patients the NP tissues are stiffer than the AF tissues. This can be a result of a higher calcium level in the NP tissue, as can be observed in patient 5 ( Table 2 and Fig. 5B ). This male patient has a stiffer NP tissue from the L3-4 disc with a high acoustic impedance value ( Table 1 ). Calcium deposits in the spine can be due to aging, infections or some treatments. Even though calcifications have been found to be a biomarker of disc degeneration, the mechanisms of calcium deposition are rarely studied. 27 Moreover, even though intervertebral disc calcification (IDC) is common in elderly people, the frequency of IDC was found to be higher in males. 28 In this study, we did not know the ages of the patients; therefore, in a further study, age and gender correlations with calcification could be examined, since it is known that age has a significant impact on herniation in general and on IVDH. 29–31 In this study, for EDS analysis the AF and NP tissues were dried with CO 2 gas, which is used in critical drying and opens the pores on the tissue surface, allowing the water inside the tissue to escape. 32 During this process, calcium (Ca) is transported to the surface. 32 Ca that rises to the surface under high CO 2 pressure undergoes carbonization. 32,33 Since the binding energy of Ca is lower than the binding energy of nitrogen (N), oxygen tends to bond with Ca. 34 This explains the increase in the amount of Ca and the decrease in the amount of N in the EDS analysis. 35 Since all the tissues underwent the same process before the EDS analysis, the effects of the CO 2 would be similar across the tissues; therefore, we assumed that this kind of difference can be ignored. Acoustic impedance maps ( Fig. 1 ) were generated from the intensity maps of samples in the AI mode of SAM. The intensity images of stiffer surfaces were brighter than those of softer surfaces, because the brightness of an intensity image directly depends on the reflected ultrasound intensity, which is greater from a stiffer surface. Therefore, a component with a different elasticity value, such as a calcification, can be distinguished within tissues with micrometer resolution by SAM. The calcium level in patient 5 was found to be the highest by EDS analysis, as can be seen in Table 2 and Fig. 5B . This increased level of calcification made this tissue stiffer and therefore increased its acoustic impedance value. Oxygen and sodium levels were also higher for this patient when compared with others. On the other hand, carbon and nitrogen levels were lower than those of the other patients' tissues. Disc degeneration can be due to disruption of nutrient transport, which can result from many phenomena such as lack of motion, high-frequency loading, disc injury, aging or smoking. 36 However, the mechanism behind nutrient delivery is very complex. Glucose is the main energy supply for the disc. 37 Oxygen is vital for proper cell function. 38
Mineralization was detected in degenerated discs, especially in the specimens that exhibited calcification; 39 therefore, the higher sodium level in patient 5 may be a result of this. For patient 5, with highly calcified NP tissue, we assume that all element levels differ from those of the other patients due to greatly altered nutrient transport; this should be studied further with more patients. In Fig. 2 , the female NP tissues' acoustic impedance values are not as homogeneously distributed as the values of the male patients' tissues and the female AF tissues, making the standard deviation (error bar) greater. This might result from their menopausal status, which changes estrogen levels abruptly; some studies show that menopause increases IVDH. 40,41 It has also been shown that menopausal status is correlated with lumbar disc mineral density. 42 In this study, we did not have the age information of the patients or the menopausal status of the female patients; because of that, the correlation between the acoustic impedance values of the female AF and NP tissues and calcification depending on menopausal status could not be investigated, but it should be examined further. In Fig. 3 , the highly calcified NP tissue of patient 5 was compared with another patient's (patient 1) NP tissue. It is also obvious in this figure that patient 5 has a very high calcification level. Table 1 shows the herniated disc positions in this study as C3-4, C6-7, L3-4, L4-5 and L5-S1. Most of the patients have herniation in the lower lumbar spine, especially between the fourth and fifth lumbar vertebrae and between the fifth lumbar vertebra and the first sacral vertebra (the L4-5 and L5-S1 levels), which is in agreement with the literature. 43 For a correlation between position and acoustic impedance value, the number of patients, both male and female, has to be increased. As can be seen in Fig. 7 , the acoustic impedance values over all data were statistically non-significant for AF versus NP tissues ( Fig. 7A ); this can be a result of the fact that the gender distribution in the data was not equal. However, when the acoustic impedance values of the AF and NP tissues that were also examined with SEM and EDS were analyzed, the impedance values of the AF tissues were significantly higher than those of the NP tissues ( Fig. 7B ). The acoustic impedance differences for female tissues were non-significant ( Fig. 7C ), whereas those for male tissues were highly significant ( Fig. 7D ), indicating that the male AF tissues' impedance values were significantly higher than the male NP tissues' impedance values. There was no statistically significant difference between the female and male NP tissues' acoustic impedance values ( Fig. 7E ). On the other hand, the male AF tissues' acoustic impedance values were significantly higher than the female AF tissues' impedance values ( Fig. 7F ). Since disorganization of collagen fibers is known to lead to disc degeneration, and may therefore cause or accelerate herniation of the intervertebral disc, the collagen protein organization could also be examined with these imaging approaches. 44
Conclusions In this study, we aimed to examine the AF and NP tissues of female and male patients with cervical and lumbar intervertebral disc herniation by SAM and SEM-EDS. Using SAM, we observed acoustic property variations within AF and NP tissues from female and male patients by obtaining acoustic impedance maps. We obtained higher-resolution images, together with chemical information about the tissues, by SEM-EDS analysis. Higher calcification in a tissue caused a higher acoustic impedance value obtained by SAM and altered element levels obtained by SEM-EDS. Consequently, the AF and NP tissue variations in intervertebral disc herniation patients were observed by SAM and SEM-EDS for the first time, and this achievement may lead to these techniques being combined in the future for the investigation and removal of herniated AF and NP tissues with micrometer resolution.
Intervertebral disc herniation (IVDH) is observed in humans as a result of the alteration of annulus fibrosus (AF) and nucleus pulposus (NP) tissue compositions in intervertebral discs. In this study, we investigated the feasibility of scanning acoustic microscopy (SAM), scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS) for characterizing the herniated segments of AF and NP tissues from male and female patients. SAM determined the acoustic property variations in AF and NP tissues by calculating the acoustic impedance values of samples from 15 patients. SEM provided higher-resolution images, and EDS provided elemental analysis of the specimens. Consequently, we suggest that these techniques have the potential to be combined for the investigation and removal of the disrupted AF and NP tissues with micrometer resolution in clinics. Scanning acoustic microscopy, scanning electron microscopy and energy dispersive spectroscopy of annulus fibrosus and nucleus pulposus tissues from patients with intervertebral disc herniation were performed to analyse and determine the acoustic property variations in the tissues.
Author contributions B. T., B. D. and S. B. conducted the experiments, B. T. and B. D. analysed the results, K. A. and C. O. performed the excisions, M. B. U. acquired funding and supervised. All authors reviewed the manuscript. Conflicts of interest There are no conflicts to declare. Supplementary Material
This study was supported by a grant from the Ministry of Development of Turkey (Project Number: 2009K120520). The authors thank all the volunteers who contributed to this project.
CC BY
no
2024-01-16 23:43:50
RSC Adv.; 14(4):2603-2609
oa_package/5c/d9/PMC10788776.tar.gz
PMC10788783
38081060
Introduction Electroencephalography (EEG) is a well-known technique for non-invasive brain monitoring with applications in research and medicine. Most EEG systems use conductive gel to reduce the impedance between the scalp and the electrodes (Kappenman and Luck 2010 ), limiting the practicality of EEG recording. Preparing gelled systems requires anywhere from tens of minutes to over an hour of skilled preparation (Chu 2015 ). Moreover, the gel dries during sessions, leading to degraded signal quality (Lopez-Gordo et al 2014 ). The gel can also bridge nearby electrodes, which limits the feasibility of high-density montages (Alschuler et al 2014 ). Finally, properly removing the gel from the cap and hair is highly inconvenient, and the gels can cause skin irritation in some participants (Li et al 2017 , Hsieh et al 2022 ). A high-quality system that does not require gel (i.e. a ‘dry’ system) could increase the convenience and applicability of EEG. However, dry electrodes generally exhibit higher and more variable skin-electrode impedance than gelled electrodes, lowering signal quality and increasing sensitivity to artifacts (Li et al 2017 ). Due to the limitations of current dry electrodes, there has been significant focus on novel material approaches to reduce the impedance and improve the interface between the skin and the dry electrodes (Yang et al 2022 ), including materials such as platinum (Liu et al 2019 ), graphene (Shao et al 2019 , Ko et al 2021 , Zhai et al 2022 ), hydrogels (Alba et al 2010 , Li et al 2021 , 2023 ), and conductive textiles (Lin et al 2011 ). However, most materials have failed to gain widespread adoption due to comfort, usability, and cost issues. Additional limitations of existing dry electrode materials include difficulty and expense of manufacturing, low biocompatibility, and fragility (Radüntz 2018 , Li et al 2020 , Yang et al 2022 ). Therefore, commercially popular dry electrodes typically consist of conductive metals or polymers coated in silver, gold, Ag/AgCl, or nickel, such as g.SAHARA by g.tec GmbH and Waveguard touch by ANT Neuro (Hinrichs et al 2020 ). To overcome the impedance limitations of existing dry electrode materials, popular dry systems have converged on the common design of a rigid frame and large textured or spiked electrodes. This design exerts high pressure on the scalp and has a large contact area per electrode, which is effective at reducing skin-electrode impedance (Li et al 2017 , Fiedler et al 2018 ). However, these designs can become uncomfortable soon after application (Fiedler et al 2014 , Li et al 2020 ). Large dry electrodes and housings are also bulky (Kübler et al 2014 ), limiting electrode density to levels that may be insufficient for more advanced analysis techniques such as source reconstruction and independent component analysis (Puce and Hämäläinen 2017 , Michel and Brunet 2019 ). Additionally, these housings do not adequately fit all scalp sizes (Radüntz 2018 ) and complicate the use of EEG in the context of multimodal recording and neurostimulation techniques (e.g. EEG and transcranial magnetic stimulation). Recently, we have demonstrated a novel material and manufacturing approach for dry EEG electrode arrays, consisting of 3D mini-pillars fabricated from Ti 3 C 2 T x -cellulose aerogels (i.e. MXtrodes). 
In the same work, we demonstrated the ability to record resting-state EEG with temporal and spectral characteristics comparable to gelled Ag/AgCl electrodes without applying excessive pressure, without a rigid frame, and with minimal scalp preparation. MXtrodes are safe and easy to manufacture, have excellent biocompatibility, are soft and durable, and can be scalably manufactured in high-density arrays (Driscoll et al 2021 ). These advantages address many of the limitations of existing dry technologies. However, additional rigorous quantitative evaluation of the EEG signals is needed to evaluate and benchmark MXtrodes against existing, trusted sensors. This validation will help establish MXtrodes for research and future clinical applications. Additionally, this evaluation may help support the advantages of high-density EEG configurations against conventional cm-scale single electrodes. EEG sensors are usually validated by comparing them to a trusted standard because there is no ‘ground truth’ scalp EEG signal to serve as a baseline for evaluation (Zrenner et al 2020 , Luck 2022 ). Gelled Ag/AgCl electrodes are often used as this standard (Guger et al 2012 , Oliveira et al 2016 , Kam et al 2019 , Hinrichs et al 2020 ). When making these comparisons, researchers must choose whether to record each electrode type from the same location at different times or simultaneously but from different locations. We deemed simultaneous recording superior for this study because instantaneous comparisons between simultaneously recorded signals can be made. In contrast, it is only possible to compare signals recorded at different times by their average activity (Pourahmad and Mahnam 2016 ). Furthermore, EEG processes are nonstationary, such that even when the underlying EEG timeseries are qualitatively different, their long-term averages can be the same (He 2014 ). Moreover, one important limitation of previous EEG electrode validation studies and many current dry and wet systems is that they require active pre-amplification to get high-quality timeseries (Guger et al 2012 , Lopez-Gordo et al 2014 , ActiCAP Slim/ActiCAP Snap—Brain Vision 2020 ). Many current dry EEG solutions also rely on real-time cleaning and artifact rejection to obtain usable data (DSI-24 n.d. , Thirty Two Channel Wireless EEG Head Cap System—FLEX Saline n.d. ). Our solution uses neither and is a true comparison to the gold standard, gelled, Ag/AgCl cup electrodes. Therefore, to compare average and instantaneous signal similarity between and within electrode types, we recorded EEG signals from dry MXtrodes and gelled Ag/AgCl electrodes simultaneously. We investigated root mean square (RMS) amplitude, spectral power, and spectral coherence over short time windows and frequency content between electrodes. We also calculated timeseries correlations for wideband and canonical EEG narrowbands to assess the instantaneous similarity between electrodes. To explore endogenous neural signals, we computed event-related potentials (ERPs) in response to a simple vigilance task. A particular advantage of dry MXtrodes is the ease of fabricating them in high-density configurations (Driscoll et al 2021 ). Accordingly, we recorded from arrays with inter-MXtrode distances of 6 mm, less than half the spacing between electrodes in the 10-5 system (Oostenveld and Praamstra 2001 ), which allowed us to compare high-density EEG signals collected on MXtrodes at various inter-electrode distances.
Methods Participants The Drexel University Institutional Review Board approved the study under protocol #1904007140. We collected data from ten participants (six male). The average age of participants was 21.89 years (SD = 2.67). We recruited participants using fliers and re-contacting participants who had been enrolled in previous experiments. Participants were compensated $25 for their time. Sessions lasted approximately two hours. We excluded two participants from the analyses: one due to overall poor data quality on all channels due to a damaged adapter and another due to poor Ag/AgCl electrode signal quality, leaving eight participants in the analyses. An additional eight participants were included in the impedance sessions (described in section 2.4 ), and four more participants were recruited for the scalp treatment and through-hair recording study (described in section 2.11 ). Studies that compare basic signal properties between electrode types generally find this number of participants sufficient (Li et al 2020 ). MXtrode array fabrication We fabricated dry EEG arrays following previously published protocols (Driscoll et al 2021 ). Briefly, we patterned the MXtrode array layout onto a nonwoven, hydroentangled cellulose-polyester blend substrate using a CO 2 laser. We then infused the cellulose-polyester substrate by hand with a Ti 3 C 2 T x MXene dispersion at 20 mg ml −1 obtained from Murata Manufacturing Co. (Kyoto, Japan), which wicked into the fibers and formed a conductive composite. We fabricated the 3D mini-pillars by cutting cellulose aerogels to form cylindrical pillars, similarly infusing Ti 3 C 2 T x , and placing them at electrode locations on the patterned substrate. (figure 1 (A)). After drying in a vacuum oven (80 °C, 25 mmHg), the pillars were strongly bonded to the laser-patterned substrate through MXene only (without additional adhesives). Next, we encapsulated the arrays in a ∼1 mm-thick layer of polydimethylsiloxane (PDMS), followed by degassing and curing. Finally, we trimmed the mini-pillars to a uniform height of 5 mm using a vibratome (Leica Biosystems) to expose the electrode contacts. Preparation We prepared participants’ foreheads using the method described in (Murphy et al 2020a ): wiping the area with alcohol, gently rubbing with an exfoliating pad, and rewetting the area with 0.9% concentration saline (Cytiva). We then centered two 4 × 4 square arrays of MXtrode mini-pillar electrodes embedded in a PDMS matrix bilaterally over approximate F3 and F4 positions of the international 10-5 system (figure 1 (B)). To find F3/F4, we measured the vertex position halfway from nasion to inion and marked positions 12 cm forward and 3 cm to either side from the vertex. If these positions caused the arrays to overlap the hairline, we positioned the arrays immediately below the hairline instead. We coated the PDMS border of each array in silicone spray adhesive (Hollister Adapt Medical Adhesive Spray) to keep the arrays in place temporarily. Next, we placed elastic netting (Surgilast Tubular Elastic Dressing Retainer, Size 6) over the participant’s head to hold the arrays and provide light pressure. Then, we inserted passive Ag/AgCl cup electrodes (Technomed Disposable EEG Cup Electrodes) at several positions under the netting. We placed one Ag/AgCl electrode vertically centered to the outside of each array in three of the eight participants in the analyses. 
In the remaining five of the eight analyzed participants, we placed two Ag/AgCl electrodes to the outside of the array instead, equally vertically spaced along it. In these participants, we placed another Ag/AgCl electrode at approximately the Iz position, which was not included in the present analyses. The setup procedure took about 5 min per participant. After removal, the MXtrodes left temporary indents (figure 1 (C)). We reused four total bifrontal arrays across the eight participants. The average reuse count of the bifrontal arrays was 2.5 uses (SD = 1.64). Immediately after removal, we disinfected the arrays with alcohol wipes. The MXtrode arrays, Ag/AgCl electrodes, and ground/reference electrodes were connected simultaneously to an Intan RHD Recording System (Intan Technologies, USA) using two RHD 32-Ch recording headstages modified to remove a short between the reference and ground, which allows for separate reference and ground electrodes. Two Natus Disposable Adhesive Disc electrodes served as ground and reference on the left and right mastoids, respectively. Recording We recorded EEG in a shielded room using a passive Intan RHD amplifier set to a sampling rate of 2 kHz. We collected the data used in our analyses as part of a larger cognitive experiment with several parts. Participants first performed 2 min blocks of alternating eyes-open and eyes-closed resting state, in which they were asked to sit quietly and remain relaxed without becoming drowsy. Participants then completed blocks of approximately 10 min of the psychomotor vigilance task (PVT) (Mentzelopoulos et al 2023 ), 12 min of the attention network task (ANT; Fan et al 2002 ), 10 min of the N-back task (Gevins and Cutillo 1993 ), and finally 10 additional minutes of PVT. In each trial of the PVT, a fixation cross appeared at the center of the screen. After a variable delay (2–11 s), a red dot (the probe) appeared at the center of the screen and remained for two seconds or until a button was pressed (supplementary figure 1). The next trial then began. Each 10 min block of the PVT averaged 85 trials for 170 total trials per participant. We only analyzed the first block of PVT data in this study. We chose this block to mitigate the influence of fatigue on our results (Rich et al 2023 ). The PVT task measured the simplest event-related cognitive process in the task battery, cued attention during sustained vigilance (Kribbs and Dinges 1994 ), and thus required the least eye movement. A single block of PVT data (10 min) is well beyond the durations commonly used for continuous comparisons (often two to four minutes) and contains enough artifact-free trials to enable event-related analyses. We measured the MXtrode-skin impedance with the Intan RHD amplifier during the sessions. However, we found significant discrepancies between the impedance measurements from the Intan and those from a benchtop potentiostat (Gamry Ref. 600), particularly at test frequencies <100 Hz (supplementary table 1). The Intan’s minimum recommended test frequency for accurate impedance measurements is 1 kHz, which may explain the inaccuracy at lower frequencies (Foy and Harrison 2021 ). While 1 kHz is the appropriate test frequency for intracranial microelectrodes with impedance >10 3 Ω, in commercial EEG amplifiers, the test frequency is typically <100 Hz, especially for resistances approximating the target impedance for EEG electrodes (∼10 kΩ or less; Food & Drug Administration 2020 , Blanch 2022 , Kinnunen and Simonaho 2022 ). 
Therefore, we conducted separate impedance measurements in eight healthy volunteers (four male). The average age of participants was 26.38 years (SD = 5.36). Participants were prepared with a single MXtrode array at F3 and one or two gelled Ag/AgCl electrodes next to the array. Skin preparation was otherwise identical to that used in the EEG recording sessions. In these sessions, we measured impedance with the Gamry potentiostat at a test frequency of 10 Hz. Preprocessing We preprocessed and analyzed all data in MATLAB 2019b (Mathworks, Inc., Natick, Massachusetts, USA) using EEGLAB (Delorme and Makeig 2004 ), ERPLAB (Lopez-Calderon and Luck 2014 ), and custom functions. We imported data into MATLAB using Intan data conversion tools (‘ MATLAB RHD file reader ’) and then converted the data into EEGLAB format using custom scripts. This import included an auxiliary EEG channel, which carried analog transistor–transistor logic signals generated by the stimulus presentation software, PsychoPy ® , to mark events (Peirce et al 2019 ). We used custom MATLAB scripts to decode event types depending on pulse width and repetition. Following conversion to EEGLAB structures, we individually ran data from each participant through a semi-automatic preprocessing pipeline. We bandpass filtered all EEG data (Ag/AgCl and MXtrode channels) from 1 to 35 Hz using a non-causal FIR filter with a transition bandwidth of 1 Hz and cutoff frequencies (−6 dB) of 0.5 and 35.5 Hz, respectively (function ‘ pop_eegfiltnew ’; (Widmann et al 2015 )). We selected the 1–35 Hz band to include frequencies from Delta through high Beta, which typically have a high enough signal-to-noise ratio (SNR) to be interpretable in standard-quality recordings. We then automatically iteratively rejected MXtrode channels (function ‘ pop_clean_rawdata ’ version 2.7, using default parameters, channel rejection tool component only) with visual inspection after each round until no additional channels were rejected (supplementary figure 2). We did not include Ag/AgCl electrode data in this iterative rejection step because our analysis objective included testing the correlation between MXtrodes and Ag/AgCl electrodes, and the ‘ pop_clean_rawdata ’ function uses the correlation between nearby electrodes as a basis for rejections. Instead, we visually assessed Ag/AgCl electrodes for data quality. Next, we iteratively used the timeseries artifact rejection component of ‘ pop_clean_rawdata ’ on all channels (Ag/AgCl and MXtrodes) with default parameters, followed by visual inspection. Because we wanted to compare signals between electrode types with minimal preprocessing, we did not use the artifact subspace reconstruction feature of ‘ pop_clean_rawdata.’ We computed the channel and timeseries rejections described above based on broadband-filtered data (1–35 Hz). We then applied these channel and timeseries rejections to copies of the data narrowband filtered to Delta (1–4 Hz), Theta (5–7 Hz), Alpha (8–12 Hz), Beta (13–30 Hz), and unfiltered data (used in spectral analyses). This process created datasets representing different frequency bands for analysis but maintained identical channels and timeseries synchronization across all versions of filtering. Filter specifications are reported in supplementary table 2. 
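A rough, non-authoritative SciPy stand-in for the band-pass step described above is sketched below. It is not the EEGLAB pop_eegfiltnew implementation used in the study; the tap-count heuristic and the use of filtfilt for the zero-phase (non-causal) application are assumptions made for illustration.

```python
# Rough SciPy equivalent of the band-pass described above: a Hamming-windowed
# FIR with ~1 Hz transition bands and -6 dB points near 0.5 and 35.5 Hz,
# applied non-causally. Illustrative stand-in, not the authors' pipeline.
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 2000                                      # sampling rate used in the study [Hz]
TRANSITION_HZ = 1.0
NUMTAPS = int(3.3 * FS / TRANSITION_HZ) | 1    # Hamming-window rule of thumb, forced odd

def bandpass_1_35(eeg):
    """eeg: (n_channels, n_samples) array; returns a filtered copy."""
    taps = firwin(NUMTAPS, [0.5, 35.5], pass_zero=False, window="hamming", fs=FS)
    # filtfilt gives a zero-phase (non-causal) result by filtering forward and
    # backward, so the effective attenuation is doubled relative to one pass.
    return filtfilt(taps, 1.0, eeg, axis=-1)

# Example on 30 s of synthetic 32-channel data
data = np.random.randn(32, 30 * FS)
filtered = bandpass_1_35(data)
print(filtered.shape)
```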
Electrode selection for similarity metrics We placed the MXtrode arrays and Ag/AgCl electrodes at different scalp locations in our design (figure 1 ), which may have led to recording slightly different sources of brain activity (Michel and Murray 2012 ). One method for handling this potential confound is to compute a comparison between two trusted sensors to serve as a baseline for the expected differences in each metric (Lopez-Gordo et al 2014 ). We used a pair of Ag/AgCl electrodes for this baseline. We also compared signals between a MXtrode:Ag/AgCl pair to test inter-electrode type differences and several pairs of MXtrodes to test inter-MXtrode differences at various distances (figure 2 ). We selected six pairs of electrodes for analysis. We chose five of these pairs to replicate the densest spacing in a standard 10-5 array (approximately 2 cm; Oostenveld and Praamstra 2001 ) within and between electrode type. We chose the sixth pair, which compared two MXtrodes at 6 mm spacing, to explore how signal similarities and differences change at the extreme densities possible with the MXtrode array. No Ag/AgCl comparison pair could be formed at this density because Ag/AgCl electrodes cannot be placed much closer than 2 cm before bridging becomes difficult to avoid. To select electrodes for each of these comparisons, first, we selected the lowest impedance Ag/AgCl electrode, the ‘primary’ electrode, and labeled it ‘AG’. We compared AG to (a) the second Ag/AgCl electrode (‘AG2’) in cases where it was available ( n = 9), forming the ‘AG:AG2’ pair, (b) the MXtrode nearest to AG, which varied across participants (‘MX-Near’) forming the ‘AG:MX-Near’ pair, and (c) the MXtrode farthest from AG, which varied across participants (‘MX-Far’) comprising ‘AG:MX-Far’ pair. We compared the MX-Near electrode to the MX nearest to it (‘MX-Neighbor’), forming the ‘MX-Near:MX-Neighbor’ pair. Finally, we compared the MXtrodes at the corners of the array (the furthest possible distances within the array) to the top right ‘MX-Q1’ and bottom left ‘MX-Q3’ corner MXtrodes (forming the ‘MX-Q1:MX-Q3’ pair) and finally the top left ‘MX-Q2’ and bottom right ‘MX-Q4’ corner MXtrodes (forming the ‘MX-Q2:MX-Q4’ pair). Across arrays, the MXtrodes involved in each pair sometimes differed slightly due to channel rejection (supplementary figure 2). If we rejected the MXtrode that would normally have been used in any comparison for poor signal quality, we used the next closest MXtrode. We did not reject any Ag/AgCl electrodes for the retained participants. Timeseries analyses Using the electrode pairs described above, we first computed timeseries correlations for all narrowband and broadband data (MATLAB function ‘corr’). Correlation is insensitive to discontinuities because it ignores temporal ordering. Therefore, we computed correlations after artifact rejection but without epoching or rejection sections of data with discontinuities. Spectral and RMS analyses We next extracted epochs for spectral and RMS analysis. Fourier spectral analysis requires epochs of sufficient length to characterize the lowest frequency included in the transform; at least two cycles are required, and four are recommended (Luck 2014 ). Therefore, we chose to limit our spectral analyses to a lower-bound frequency of 2 Hz, which implies extraction of 2-second epochs to meet the recommended number of cycles. This epoch length achieves a good trade-off between the lowest analyzable frequency and the amount of data to be included in the analysis. 
Based on this rationale, we extracted all possible continuous non-overlapping 2-second epochs from each recording. Like the channel and timeseries rejections, epoch extraction was identical for all versions of filtering. These 2-second epoched datasets (‘regularly epoched data’; M = 100.87 epochs, SD = 46.63) were used as a basis for all further spectral and RMS analyses, except for permutation analysis. We computed spectral power on the unfiltered version of the regularly epoched data by running the EEGLAB function ‘spectopo’ on each two-second epoch with the parameters 50% hamming window overlap, a range of 2–35 Hz with 1 Hz frequency resolution. We additionally computed spectral coherence (MATLAB function ‘mscohere’ with the same parameters used for spectopo) between each electrode pair and RMS amplitude for each electrode included in the analysis. For all three metrics, we computed values within each epoch, averaged across epochs within participant, and then grand averaged across participants. ERP analyses We additionally computed ERPs time-locked to PVT probe events in broadband filtered data (1–35 Hz). The epochs were −200–1000 ms event-locked to PVT cues (‘cued epochs data’; M = 19.6 epochs per participant, SD = 8.89). We baselined the epochs from −200 to 0 ms. We extracted epochs for electrodes involved in the AG:AG2, AG:MX-Near, and MX-Near:MX-Neighbor in ERP analyses. We also added the MX-Corner1:MX-Corner2 comparison (comprising the MX-Q1:MX-Q3 and MX-Q2:MX-Q4 pairs when available). We combined these comparisons because both corner pairs had the same distance between the electrodes. We computed values within each epoch, averaged across epochs within participant, and then grand averaged across participants. Permutation analyses For timeseries correlation and spectral coherence metrics, we performed a permutation analysis on a broadband-filtered version of the cued epoch data. To ensure a baseline level of exchangeability in the data epochs, we used event-locked PVT-cue epochs. We permuted the epoch order of the AG and MX-Near electrode 1000 times within-participant. We then recalculated within-participant average timeseries correlation and spectral coherence within all three possible pairs of original and permuted data, AG:MX-Near (Permuted), AG:AG (Permuted), and MX-Near:MX-Near (Permuted). Effect of skin preparation and through-hair recordings We performed one additional experiment to evaluate the performance of the MXtrode arrays of the same fabrication described above in conditions where (1) there was no scalp preparation and (2) through diverse hair types. We collected data from four additional participants (2 female). The average age was 27.25 years (SD = 6.55). Participant demographics and hair characteristics are listed in table 1 . Sessions lasted approximately two hours. We recorded EEG using the same Intan RHD system in the same shielded room with the same settings. Participants performed two rounds of the same PVT task described above with impedance recorded before, between, and after the PVT runs (separated by approximately 45 min apiece). Impedance on 4 MXtrodes per array and all 6 Ag/AgCl electrodes was recorded on each electrode individually at three time points using the Gamry potentiostat. 
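The per-epoch spectral power, coherence, and RMS computations described in this section can be sketched in Python as follows. This uses SciPy stand-ins for the MATLAB/EEGLAB functions named above (spectopo, mscohere) and synthetic data; it is an illustration of the metric definitions, not the authors' code.

```python
# Illustrative per-epoch spectral metrics paralleling the description above
# (1 Hz resolution, 50% Hamming-window overlap, 2-35 Hz range), on synthetic
# data, using SciPy rather than the EEGLAB/MATLAB functions the authors used.
import numpy as np
from scipy.signal import welch, coherence

FS = 2000
EPOCH_SAMPLES = 2 * FS                 # non-overlapping 2 s epochs

def epoch_metrics(x, y):
    """x, y: 1-D arrays for one 2 s epoch from two electrodes."""
    kwargs = dict(fs=FS, window="hamming", nperseg=FS, noverlap=FS // 2)
    f, pxx = welch(x, **kwargs)                 # spectral power of x
    _, cxy = coherence(x, y, **kwargs)          # magnitude-squared coherence
    band = (f >= 2) & (f <= 35)
    rms = np.sqrt(np.mean(x ** 2))              # RMS amplitude of the epoch
    return f[band], 10 * np.log10(pxx[band]), cxy[band], rms

# Example: average the metrics over all complete epochs of a recording
sig_a = np.random.randn(60 * FS)
sig_b = 0.8 * sig_a + 0.2 * np.random.randn(60 * FS)
n_epochs = len(sig_a) // EPOCH_SAMPLES
results = [epoch_metrics(sig_a[i * EPOCH_SAMPLES:(i + 1) * EPOCH_SAMPLES],
                         sig_b[i * EPOCH_SAMPLES:(i + 1) * EPOCH_SAMPLES])
           for i in range(n_epochs)]
mean_coherence = np.mean([r[2] for r in results], axis=0)
print(mean_coherence.mean())
```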
For each participant, we recorded three new conditions simultaneously: (a) frontal F4 location with full prep (impedance only due to hardware limitations per below and to reduce burdens to participants), (b) frontal F3 location with no prep, (c) Cz-centered location with prep, but through hair. See supplementary figure 3. For (a), we used the exact procedure explained in section 2.3 . However, we only applied this procedure to a single 4 × 4 MXtrode array on the F4 location instead of two arrays placed on F3 and F4. In all cases, two Ag/AgCl electrodes were placed lateral to the array. For (b), the location and procedure were identical to the procedure explained in section 2.3 above. However, we did not wipe the area with an alcohol wipe, exfoliate it, nor re-wet it with saline. In all cases, two Ag/AgCl electrodes were placed lateral to the array. For (c), we found the Cz location along the vertex of the participant’s hair. We then parted the participant’s hair along that location. Next, we wiped the area with an alcohol wipe, allowed it to dry, and rewet with saline. We then placed a 4 × 4 MXtrode array so that the four MXtrodes in the leftmost column were directly on the visible line of the scalp where we parted the hair. We placed two gelled Ag/AgCl electrodes immediately to the left and filled them with gel. We then stretched medical wrap over the array, around the head, under the chin, and around once more to ensure enough pressure for the MXtrodes to contact the scalp. See supplementary figure 3 for an illustration of the session set-up for the cranial vertex site. The (b) F3 no-preparation and (c) hairy Cz-centered arrays were connected to the Intan with the same ground and referencing as described above. The F4 array for condition (a) was not connected to the Intan due to limitations in our custom adapters. We only used it to record impedance values. EEG was preprocessed as described in 2.5. Electrode selection was as described in 2.6. For the vertex, however, we only collected data on 4 MXtrodes out of the 16-Mxtrode array in which we focused on achieving adequate scalp contact through the hair. This limited pairings to AG:MX-Near, AG:AG2, MX-Near:MX-Neighbor, and AG:MX-Far. The distance between AG:MX-Far was also slightly reduced due to the geometry. Timeseries, spectral, and RMS analyses were conducted as described in 2.7 and 2.8, respectively.
Results Impedance distribution on the MXtrode and Ag/AgCl electrodes We compared the area-normalized impedances of Ag/AgCl electrodes and MXtrodes (figure 3 ). Standard outlier detection led to 1/10 Ag/AgCl electrodes and 13/200 MXtrodes being rejected from the data. The impedance on the Ag/AgCl electrodes was lower (median = 1.107 kΩ cm 2 , interquartile range = 1.000 kΩ cm 2 , n = 9 electrodes) than the MXtrode impedance (median = 3.948 kΩ cm 2 , interquartile range = 6.298 kΩ cm 2 , n = 187 MXtrodes). The difference in impedance between the electrodes was significant (Mann–Whitney U = 262, n 1 = 187, n 2 = 9, p < .001). See supplementary figure 4 and supplementary table 3 for additional impedance data suggesting good MXtrode stability, including moderate drops in impedance over time, likely due to the influence of sweat (Murphy et al 2020b ). Timeseries-based metrics Metrics computed from timeseries data included RMS amplitude (table 2 ) and Spearman's rank correlation (figure 4 and supplementary table 4). The broadband RMS amplitude was higher on all MXtrodes than on Ag/AgCl electrodes. Spearman's rank correlations revealed high overall similarity within and between electrode types. Correlations were highest in the Alpha band and for the MX-Near:MX-Neighbor and AG:AG2 pairs. The Delta and Beta bands showed the lowest correlations, especially for the MX-Near:MX-Far and MX-Near:AG pairs. Correlations for all participants in all electrode pairings by frequency bands were significant ( p < .001). See supplementary table 5 for results reporting high split-half correlations between timeseries, supplementary figures 5 and 6 for timeseries correlations from no-preparation and hairy site electrodes, and supplementary tables 6 and 7 for RMS results from no-preparation and hairy site electrodes. Spectral metrics Metrics derived from spectral transformations included spectral power and spectral coherence (figure 5 ). The MX electrodes had spectral power higher than that of the AG electrode by 1 dB or less at all frequencies (figure 5 (A)). Spectral power was similar across electrodes otherwise, with the greatest absolute differences between about 1–10 Hz and 17–25 Hz. The expected Alpha-band power enhancement was present on all the electrode types. Spectral coherence was high overall and highest in the Alpha band for all pairs (figure 5 (B)). Closely spaced MXtrodes (MX-Near:MX-Neighbor) were slightly less coherent in lower frequencies but much more coherent in higher frequencies than the further-spaced Ag/AgCl electrode pair (AG:AG2). MXtrode pairs spaced similarly to Ag/AgCl electrode pairs (MX-Near:MX-Far pairs) had slightly lower coherence across all frequencies, especially in lower frequency bands. The split-half signal stability of spectral coherence was high and is reported in supplementary table 8. Qualitatively, the EEG timeseries of the Ag/AgCl electrodes and MXtrodes within each individual were highly similar (figure 5 (E)). Figure 6 shows the performance of the arrays through diverse hair types at site Cz. Overall, we recovered similar 1/ f signals and evidence of modest group-level alpha power elevations that were detectable in 3 out of 4 individual participants (supplementary figure 7). Moreover, good qualitative correspondence between the timeseries of the Ag/AgCl electrodes and MXtrodes within each individual was again observed in these data (figure 6 (E)). For spectral metrics in a no-preparation forehead site, see supplementary figure 8. 
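The impedance comparison reported at the start of this section can be reproduced in form with a short Python sketch: impedances are area-normalized and then compared with a Mann-Whitney U test. The contact areas and impedance values below are placeholders, not study data.

```python
# Sketch of the nonparametric impedance comparison described above:
# area-normalize raw impedances (kOhm * cm^2) and compare the two electrode
# types with a Mann-Whitney U test. All numbers here are placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

MXTRODE_AREA_CM2 = 0.20   # assumed pillar contact area
AGCL_AREA_CM2 = 0.80      # assumed cup contact area

mx_kohm = np.array([18.0, 25.3, 14.9, 40.1, 22.7, 31.5])   # raw impedances, kOhm
ag_kohm = np.array([1.2, 1.6, 1.1, 2.0])

mx_norm = mx_kohm * MXTRODE_AREA_CM2   # kOhm * cm^2
ag_norm = ag_kohm * AGCL_AREA_CM2

u, p = mannwhitneyu(mx_norm, ag_norm, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.3g}")
```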
ERP analysis ERPs of the cue event derived from 1 to 35 Hz data revealed high similarity within and between electrode types (figure 7). All electrodes exhibited clear P200 and N400 components. The largest absolute deviations in ERP amplitude in both hemispheres occurred between the AG and AG2 electrodes. Spearman's rank correlations of all ERPs were significant (p < .001). The AG:AG2 pair had the weakest correlation, and the MX-Near:MX-Neighbor pair had the strongest. Cued-epoch permutation analysis The correlation and spectral coherence between the non-permuted AG:MX-Near cued-epoch data were very high. The permuted AG:MX-Near cued-epoch data revealed that event-related signal content induced some broadband spectral coherence between electrodes. However, non-permuted correlation and spectral coherence between electrode pairs were always much higher than in the permuted comparisons (table 3; figure 8).
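The logic of the cued-epoch permutation comparison can be made concrete with a short sketch. The code below is our illustration under stated assumptions (epoch matrices, a Spearman similarity measure, and 500 permutations); it is not the authors' implementation, but it captures the idea that shuffling which epochs are paired leaves only the average event-locked response to drive similarity.

# Minimal sketch of the permutation comparison (assumptions, not published code).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def epoch_correlation(epochs_a, epochs_b):
    """Mean Spearman correlation across matched epochs.

    epochs_a, epochs_b: numpy arrays of shape (n_epochs, n_samples) from two
    electrodes, epoch-locked to the same cue events.
    """
    return np.mean([spearmanr(a, b)[0] for a, b in zip(epochs_a, epochs_b)])

def permuted_correlation(epochs_a, epochs_b, n_perm=500):
    """Null distribution: pair electrode A's epochs with shuffled epochs of B."""
    null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(epochs_b))
        null[i] = epoch_correlation(epochs_a, epochs_b[idx])
    return null

# observed = epoch_correlation(mx_epochs, ag_epochs)
# null = permuted_correlation(mx_epochs, ag_epochs)
# A large gap between `observed` and `null` indicates that instantaneous,
# trial-unique activity (not just the average ERP) drives the similarity.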
Discussion In this study we quantified the similarity of scalp EEG signals recorded simultaneously from dry high-density MXtrode electrodes and gelled Ag/AgCl electrodes. Across all the metrics we computed, scalp EEG signals recorded on dry MXtrode channels were highly similar to signals on gelled Ag/AgCl electrodes in the 1–35 Hz range. The most notable deviations in individual frequency bands were slightly lower correlation and spectral coherence between pairs involving MXtrodes in the Delta through Alpha bands and much higher correlation and spectral coherence between the nearest MXtrode channels in the Beta band. The signals collected on MXtrode channels revealed Alpha-band power and ERP signatures of cortical origin. Participants tolerated the MXtrode arrays well. No participants reported discomfort due to the array or wrapping, even when asked at the end of the session. Overall, our results support that MXtrodes record average and instantaneous spectral and timeseries information similar to that of Ag/AgCl electrodes and are suitable general replacements, including through the hair, when adequate scalp contact is achieved. We performed all analyses on both Ag/AgCl and MXtrode timeseries so that the Ag/AgCl results could serve as a baseline against which to compare MXtrode signals. These comparisons revealed high similarity across the electrode types. We found that RMS amplitudes were slightly higher on dry MXtrodes compared to a nearby Ag/AgCl electrode. The power spectral density analysis clarified that this amplitude difference was a relatively uniform difference of <1 dB across the 1–35 Hz range (figure 5(A)). This power difference had little impact on the parity of signals recorded across the electrode types because it was similar across all frequencies. In addition, we observed only slightly lower timeseries correlations and spectral coherence between MXtrodes and Ag/AgCl electrodes than between two Ag/AgCl electrodes. The magnitude of the timeseries correlation and spectral coherence for all electrode pairs remained at high levels for inter-electrode comparisons (figure 4) (Li et al 2020). Together, these findings suggest that MXtrodes and Ag/AgCl electrodes record qualitatively similar timeseries and spectral content when positioned at similar distances. We detected signatures of cortical origin in the spectral features of the Alpha band on both electrode types. Specifically, we observed a peak in Alpha-band spectral power (figure 5(A)), a well-known signature of occipital brain sources (Smith et al 2017), on both electrode types. We also observed a peak in spectral coherence in the Alpha band for all electrode pairs (figure 5(B)). The fact that both spectral power and coherence peaked in the Alpha band suggests that these results might be linked. The likely source of this link is phasic alpha bursting (Rusiniak et al 2018). Coherence measures the stability of the relative phase and amplitude between two timeseries and does not inherently encode frequency power information. However, amplitude covariation between timeseries has been linked to increased coherence (Lachaux et al 1999) and is a defining feature of alpha bursts. Therefore, we expect that our results encode the presence of alpha bursts as both enhanced average spectral power over the recording and enhanced spectral coherence due to the shifts in amplitude that bursts induce. These results emphasize that MXtrodes detect cortical signals similar to those of gelled Ag/AgCl electrodes.
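One way to probe the alpha-bursting interpretation directly would be to test whether alpha-band amplitude envelopes covary across channels. The sketch below is our illustration of such a check, not an analysis from the paper; the band edges and sampling rate are assumptions.

# Minimal sketch: correlate Hilbert amplitude envelopes of alpha-band activity.
# Strong envelope covariation alongside high alpha coherence would be
# consistent with shared phasic alpha bursts.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from scipy.stats import spearmanr

FS = 1000.0          # sampling rate in Hz (assumed)
ALPHA = (8.0, 13.0)  # alpha band edges in Hz (assumed)

def alpha_envelope(x, fs=FS):
    sos = butter(4, ALPHA, btype="bandpass", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x)))

def envelope_covariation(x, y):
    """Spearman correlation between alpha amplitude envelopes of two channels."""
    rho, p = spearmanr(alpha_envelope(x), alpha_envelope(y))
    return rho, p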
The preserved spectra, together with the ability of MXtrodes to detect individually variable alpha peaks (Bazanova and Vernon 2014) through the hair whenever such peaks are observed in Ag/AgCl recordings, suggest that MXtrodes could be deployed in full-head montages, such as full EEG cap designs. Where we did observe differences in spectral coherence between electrode pairs, those differences were likely related to electrode geometry and spacing. First, we found increased low-frequency (Delta through Alpha-band) coherence on Ag/AgCl electrodes compared to MXtrodes. This difference existed across all electrode pairs that involved an MXtrode, which strongly suggests an effect of electrode type. A likely explanation for this finding is that the larger diameter and broad gel base of the Ag/AgCl electrodes spatially integrate signals from a larger scalp area than the smaller dry contacts. A larger area spatially low-passes scalp information, which reduces the aliasing of higher frequency information into lower frequencies (Iivanainen et al 2020), effectively reducing noise. This noise reduction may have increased low-frequency coherence on gelled Ag/AgCl electrodes relative to the smaller MXtrodes. Lower EEG frequencies also have lower spatial frequency (Burgess and Gruzelier 1997, Srinivasan et al 1998), meaning less information is lost over the larger Ag/AgCl spatial integration area. These results suggest that small contact-area electrode geometries, such as those we used in our MXtrode arrays, are slightly less optimal for measuring low-frequency spectral information. However, even the extremely small 3 mm diameter electrodes we used resulted in only moderate decreases in low-frequency coherence. Additionally, it is possible to fabricate MXtrodes at larger diameters with minimal modification to current manufacturing processes (Driscoll et al 2021). Second, we observed that the nearest MXtrodes (the MX-Near:MX-Neighbor pair) had much higher spectral coherence and timeseries correlation than any other pair of electrodes in the Beta band. Because we did not observe these increases on MXtrode pairs at larger distances, they likely reflect that the MX-Near:MX-Neighbor pair sampled high-frequency topographies more densely. High EEG frequency bands such as Beta have higher spatial-frequency scalp topographical features than lower frequencies (Burgess and Gruzelier 1997, Srinivasan et al 1998). Fully characterizing higher spatial-frequency information requires denser sampling (Kuhnke et al 2018). Thus, our results suggest that high-density arrays may be especially useful for detecting the topography of high-frequency scalp signals. Although our arrays did not provide full-head coverage, it is plausible that our results would generalize across the scalp. Different scalp locations have different SNRs within frequency bands, which can influence correlation and coherence measurements. However, our frontal arrays were maximally distant from the brain's strongest rhythm (occipital Alpha), and the intrinsic rhythm of the frontal region is a relatively weak Theta signal. Therefore, the frontal scalp is an appropriate initial location to examine. Importantly, not all scalp topographical information derives from brain sources. Environmental noise, equipment noise, and non-brain physiological artifacts can contaminate the scalp EEG signal. High-frequency bands (Beta and above) are particularly susceptible to contamination since their SNR over background 1/ f activity is already low (Muthukumaraswamy 2013).
Our data cannot confirm that the enhanced high-frequency coherence we observed between nearby MXtrodes originates from brain sources rather than artifacts. However, we can draw several conclusions. First, the MX-Near:MX-Neighbor pair's high Beta timeseries correlation and spectral coherence are necessarily driven by a shared signal rather than a signal unique to individual MXtrodes, so they are unlikely to reflect manufacturing variability in MXtrodes. Furthermore, we observed that MXtrodes spaced at a similar distance to the AG:AG2 pair (∼2 cm) have very similar correlation and coherence to the AG:AG2 pair. The only standout difference is the elevated high-frequency correlation and coherence on closely spaced MXtrodes. These findings suggest that the enhanced high-frequency similarity in the MX-Near:MX-Neighbor pair is likely due to their closer spacing rather than systematic differences in signal content compared to Ag/AgCl electrodes. Thus, while we cannot rule out artifact sources in our data, our results demonstrate the potential for enhanced spatial resolution to reveal previously unmeasurable topographical features, provided that one can separate them from artifacts. It should be noted that potential scalp location effects in our analysis might have prevented a fully balanced comparison across electrode types. However, we expect that scalp location effects were small. Scalp signals in most frequency bands are quite similar at sensors spaced 2 cm apart (Srinivasan 2005). Any true scalp location-based differences would manifest most clearly in the Beta band because higher frequency topographies have EEG sources with both more real variation and weaker signals (Zelmann et al 2014). Indeed, in all comparisons, we measured the lowest coherence and correlation in the Beta band. This finding also emphasizes the information gained by the ultra-high-density MXtrode array, which had a much smaller Beta-band drop-off in these metrics. To further clarify whether the sensors included in our analysis recorded similar brain-sourced signals, we computed ERPs, which aggregate event-locked scalp activity and average out environmental noise (Luck 2014), thus largely restricting the analysis to signals originating in the brain. The PVT cue event ERPs computed from MXtrodes and Ag/AgCl electrodes were qualitatively similar (figure 7). All ERPs exhibited clear P200 and N400 components. The P200-N400 complex is a set of frontal components commonly observed in response to visual stimuli and attentional demand (Kanske et al 2011). Furthermore, we observed the largest ERP amplitude differences between the AG:AG2 pair in either hemisphere. These findings suggest that the differences in ERP amplitude between other pairs of electrodes (i.e. MXtrodes) are smaller than what could be attributed to a 6 mm shift in electrode position. Potential scalp location effects could also explain the lower ERP correlation in the AG:AG2 condition compared to the similarly spaced MX-Corner1:MX-Corner2 condition. Alternatively, this difference in correlation could reflect that the gel base of the Ag/AgCl electrodes has a less standardized scalp contact area than MXtrodes and, thus, slightly less similar broadband signal content. Our data cannot adjudicate between these possibilities. However, because it is unlikely that we recorded extremely similar ERPs in the MX-Corner1:MX-Corner2 comparison by chance, our results support at least the non-inferiority of MXtrodes relative to Ag/AgCl electrodes for recording ERPs.
In event-related recordings, signals may derive similarity from both the standardized 'average' ERP brain response induced by the stimulus and the instantaneous activity unique to each trial. Our permutation analysis tested how much of the similarity between MXtrodes and Ag/AgCl electrodes in event-related recordings (the PVT cue event) was due to average versus instantaneous activity. The correlation and coherence of the permuted epochs were much lower than those of the non-permuted epochs across all frequencies (table 3; figure 8). Therefore, instantaneous activity was responsible for most of the AG:MX-Near correlation and coherence, as opposed to average event-related induced activity. This finding suggests that MXtrodes recorded average brain responses similar to those of Ag/AgCl electrodes, as well as similar instantaneous activity. Researchers can therefore interpret MXtrode timeseries similarly to Ag/AgCl timeseries. Another common artifact class of concern in high-density arrays is bridging. Bridging occurs when a conductive medium links two electrodes, causing their signals to become identical and non-comparable to the rest of the array. Dry electrodes experience this issue less commonly than gelled electrodes, since there is no gel to smear between electrodes accidentally, but sweat or residual saline after scalp rewetting could still have bridged the MXtrodes in our arrays. However, bridged signals are nearly identical across all frequency bands (Alschuler et al 2014). We did not observe identical signals on any electrode pairs, so bridging was not a large concern, even with our quick scalp preparation method. Overall, the strong similarity between dry MXtrodes and gelled Ag/AgCl electrodes, including through diverse hair types, across the metrics we computed is striking, especially considering that MXtrodes had higher average area-normalized impedance than gelled Ag/AgCl electrodes in a separate test (figure 3 and supplementary table 3). The larger variance we observed in MXtrode impedance may be partially due to fabrication variability or participant-wise skin properties, which have a greater impact on dry electrodes than on gelled electrodes (Li et al 2017). Additionally, the impedance of dry electrodes tends to drift more over time (Krachunov and Casson 2016). Multi-material strategies could improve the performance of MXtrodes. Pure-MXene leads have advantages in scalability, but future work should explore whether coating the electrode contact area with conductive polymers such as PEDOT can reduce impedance (Donahue et al 2020). The MXtrode arrays sample the scalp at the highest two-dimensional density we know of in the EEG literature. Although this is a significant technical innovation, there is debate about whether increased scalp density is valuable in practice. Some researchers have argued that existing commercial EEG sensor arrays are already dense enough to extract all meaningful information from scalp voltage topographies (Nunez and Srinivasan 2006). The skull and scalp spatially low-pass brain potentials, and a high number of electrodes may oversample the resulting scalp voltage topography (i.e. exceed the spatial Nyquist rate of the scalp; Srinivasan et al 1998, Nunez and Srinivasan 2006). However, methods for estimating the scalp's spatial Nyquist rate may be inaccurate because they have typically assumed idealized physical models that may not hold in reality.
Additionally, the reconstruction of brain sources may benefit from densities of up to thousands of electrodes (Grover and Venkatesh 2017). Therefore, high-density arrays may have utility for a variety of purposes. Experimental evidence may be necessary because theoretical analyses disagree about the potential uses of ultra-high-density EEG arrays. Experimental data already support that densities beyond 256 electrodes are practically useful in a variety of EEG subdomains. These include decoding SSVEP (Robinson et al 2017), classifying brain states (Petrov et al 2014), and recording from neonates (Odabaee et al 2013). Additional density could potentially benefit techniques that have already demonstrated improvement with electrode counts up to 96, 128, or 256 electrodes, including localizing and monitoring epilepsy (Lantz et al 2003, Nemtsas et al 2017) and detecting subcortical EEG sources (Seeber et al 2019). The availability of dry, passive, ultra-high-density arrays may help to accelerate discoveries in this area. Limitations The exploratory nature of the device fabrication in this study may have resulted in more variability than would be present in bulk fabrication, which could have reduced the similarity between MXtrode signals. Hand-inking the MXtrode arrays likely contributed the most variability to our process. In the future, automated methods like inkjet printing could greatly reduce this variability. Our design did not counterbalance the locations of the electrodes. However, a scalp location effect would be unlikely to produce our observed results. Electrode spacing, geometry, and potentially the material properties of Ti 3 C 2 T x MXene are more likely sources of signal variability. Some MXtrode channels were rejected due to high impedance, which may also have been caused by fabrication variability. These rejections led to minor inconsistencies in which MXtrodes were used to form pairs, but the high density of the MXtrode arrays likely minimized the impact of these effects. Our channel rejection strategy occasionally identified contiguous sections of some arrays, which we inferred likely reflected physical disconnection of the array from the scalp due to inadequate head wrapping. Future application strategies would benefit from methods for generating more uniform (though not more intense) pressure across MXtrode arrays. In addition, the MXtrodes we used had a smaller diameter than the Ag/AgCl electrodes used for comparison, making it challenging to discern whether signal differences were due to geometry or material properties. Future studies could clarify this by comparing similar geometries across different electrode types. Additionally, we recorded from only a limited number of scalp locations. In the future, full-head recordings will be important to confirm the generalizability of our findings. Finally, our analyses of recordings from a site measuring EEG through diverse hair types suggested that the arrays reported in this manuscript recover 1/ f spectral power distributions and that MXtrode timeseries are well correlated with synchronous Ag/AgCl measurements, which indicates that they are sensitive to neural signals across diverse hair types. Adequate skin preparation may be required to record optimal EEG signals from dry MXtrodes, and future manufactured geometries could be further optimized for adequate scalp contact through the hair and to further minimize impedance.
Conclusions Across metrics comparing instantaneous activity, average event-locked signals, amplitude, and spectral properties, we observed that the differences in signal between dry MXtrodes and Ag/AgCl electrodes were mostly similar to, or smaller than, the difference between two Ag/AgCl electrodes at the same distance on the scalp. Therefore, researchers can use dry MXtrodes to record signals that are non-inferior to those obtained using gelled Ag/AgCl electrodes for the same research purposes, including through diverse hair types, if adequate skin contact is maintained. The low-profile MXene array used to record EEG in this study requires minimal preparation and no gel, which could significantly speed up, and improve the tolerability of, basic research applications and the development of new BCI applications. In addition, we showed that MXtrode arrays can record signals independently, without bridging, at a spatial density four times higher than that achievable with gelled electrodes. This high density allowed us to capture more topographic information in the high-frequency (Beta) range than canonical low-density montages. Ultra-high-density montages, such as those made possible by MXtrodes, may enable more accurate source reconstruction and have potential applications in neonatal and epileptic populations. MXtrodes represent a significant advance that may simplify basic EEG research and open new domains for EEG applications.
Abstract Objective. To evaluate the signal quality of dry MXene-based electrode arrays (also termed 'MXtrodes') for electroencephalographic (EEG) recordings, where gelled Ag/AgCl electrodes are the standard. Approach. We placed 4 × 4 MXtrode arrays and gelled Ag/AgCl electrodes on different scalp locations. The scalp was cleaned with alcohol and rewetted with saline before application. We recorded from both electrode types simultaneously while participants performed a vigilance task. Main results. The root mean squared amplitude of MXtrodes was slightly higher than that of Ag/AgCl electrodes (.24–1.94 μV). Most MXtrode pairs had slightly lower broadband spectral coherence (.05 to .1 dB) and Delta- and Theta-band timeseries correlation (.05 to .1 units) compared to the Ag/AgCl pair (p < .001). However, the magnitude of correlation and coherence was high across both electrode types. Beta-band timeseries correlation and spectral coherence were higher between neighboring MXtrodes in the array (.81 to .84 units) than between any other pair (.70 to .75 units). This result suggests that the close spacing of the nearest MXtrodes (3 mm) more densely sampled high spatial-frequency topographies. Event-related potentials were more similar between MXtrodes ( ρ ⩾ .95) than between equally spaced Ag/AgCl electrodes ( ρ ⩽ .77, p < .001). Dry MXtrode impedance ( x̄ = 5.15 kΩ cm 2 ) was higher and more variable than that of gelled Ag/AgCl electrodes ( x̄ = 1.21 kΩ cm 2 , p < .001). EEG was also recorded on the scalp across diverse hair types. Significance. Dry MXene-based electrodes record EEG at a quality comparable to conventional gelled Ag/AgCl while requiring minimal scalp preparation and no gel. MXtrodes can record independent signals at a spatial density four times higher than conventional electrodes, including through hair, thus opening novel opportunities for research and clinical applications that could benefit from dry and higher-density configurations.
Acknowledgments This work was supported by the National Institutes of Health (Award No. R01NS121219-01 to F V and J D M), philanthropic donations from Starfish Neuroscience, Inc. (to F V and J D M), and the Penn Center for Health, Devices, and Technologies (F V). Data availability statement The data that support the findings of this study are openly available at the following URL/DOI: https://doi.org/10.6084/m9.figshare.c.6696429.v2 . Ethics statement The Drexel University Institutional Review Board approved the study under Protocol #1904007140. All participants gave written informed consent to participate in the study. This research was conducted in accordance with the principles embodied in the Declaration of Helsinki and local statutory requirements. CRediT author statement Brian Erickson : Supervision, Conceptualization, Methodology, Software, Data Curation, Writing—Original Draft, Visualization; Ryan Rich : Conceptualization, Methodology, Software, Validation, Formal Analysis, Investigation, Data Curation, Writing—Review and Editing, Visualization, Project Admin; Sneha Shankar : Methodology, Validation, Investigation, Resources, Data Curation, Writing—Review & Editing; Brian Kim : Investigation, Software, Data Curation, Writing—Review & Editing, Visualization; Nicolette Driscoll : Methodology, Validation, Investigation, Resources, Data Curation, Writing—Review & Editing; Georgios Mentzelopoulos : Investigation, Writing—Review & Editing; Guadalupe Fernandez-Nuñez : Investigation; Flavia Vitale : Supervision, Conceptualization, Methods, Resources, Writing—Review & Editing Project administration, Funding acquisition; John Medaglia : Supervision, Conceptualization, Methods, Resources, Writing—Review & Editing, Project administration, Funding acquisition. Conflict of interest This study received funding from Starfish Neuroscience, Inc. (to F V and J D M). Starfish Neuroscience was not involved in the study design, collection, analysis, interpretation of data, or the writing of this article or the decision to submit it for publication. F V and N D are co-inventors on the two following pending international patent applications related to MXene bioelectronics: PCT/US2020/055147 and PCT/US2018/051084. The remaining authors do not have any potential conflict of interest.
CC BY
no
2024-01-16 23:43:50
J Neural Eng. 2024 Feb 1; 21(1):016005
oa_package/ad/9e/PMC10788783.tar.gz
PMC10788789
38226221
Introduction Hernias are among the most prevalent abdominal wall defects that require surgery and involve the protrusion of an organ through a weak spot in the abdominal wall [ 1 ]. Hernias take many different forms according to their position in the body; they can appear in the femoral canal, epigastrium, umbilicus, and inguinal region. The most frequently occurring types of hernia are the inguinal (70–75 %), femoral (6–17 %), epigastric (8.6 %), umbilical (3–8.5 %) and incisional (6.2 %), along with other kinds such as the Spigelian hernia [ 2 , 3 ]. Hernias typically cause pain by creating a noticeable protrusion under the skin and sometimes also involve life-threatening complications for patients. More than 20 million hernia repair surgeries are thought to be performed annually around the globe [ 4 ]. Due to a number of risk factors, including obesity and prior abdominal surgery, the number of procedures has been rising and is expected to continue to do so [ 5 ]. There has been a significant increase in the use of meshes for hernia repair. In this context, surgical meshes are crucial medical devices that aid in repairing the injured tissue. In order to stabilize the abdominal wall and provide long-term resistance, surgical prostheses for hernia repair are designed to strengthen and replace tissue abnormalities [ 6 ]. The biomaterials most frequently used in these meshes comprise a variety of natural and synthetic polymers with various structures (reticular, laminar, and hybrid) and characteristics (pore size, filament distribution) [ 1 ], polypropylene (PP) being the preferred material for fabricating meshes for these repairs [ 1 ]. PP implants reduce the risk of recurrence and post-operative pain [ 7 ], although there are many other risks associated with them, such as nerve entrapment, mesh erosion, mesh exposure, pain and infection [ 8 ]. The high incidence of infection due to these surgical procedures has repercussions in terms of economic costs, the social burden, hospital re-admissions, re-operations, hernia recurrence, impaired quality of life and plaintiff litigation [ 9 , 10 ]. The infection rate following an open inguinal hernioplasty in a clean field varies between 2.4 % and 4.9 % [ 11 ]. These percentages increase if the surgery is clean-contaminated, contaminated, or dirty, and in patients with risk factors such as diabetes, steroid use, obesity, and recurrent hernia. In cases of abdominal wall reconstruction due to incisional hernia, surgical site infection rates are close to 33 % [ 12 , 13 ]. The World Health Organization (WHO) predicts that, by the year 2050, antibiotic resistance will surpass other significant diseases such as cancer as one of the top causes of death [ 14 ], and alternative antimicrobial agents such as quaternary ammonium compounds are therefore being proposed to combat microbial resistance [ 15 ]. Meshes loaded with antibiotics have been developed and, even though these meshes may have antibacterial activity against bacteria such as Escherichia coli and MRSA, they tend to be expensive and do not induce cell proliferation [ 16 ], the latter property being important for improving healing after hernia repair surgery [ 17 ].
As some recent studies have concluded that benzalkonium chloride (BAK), a quaternary ammonium compound, has good antibacterial properties [ 18 , 19 ], we hypothesized that antimicrobial meshes treated with different dilutions of benzalkonium chloride would be biocompatible (non-cytotoxic at 72 h) and would enhance proliferative activity, using fibroblasts as the in vitro model.
Material and methods Materials The commercial ultralight macroporous Herniamesh® mesh (REF. H60611; minimum pore size of 410 μm, average of 1800 μm, and maximum of 2270 μm, with a filament diameter of 120 μm and 48 g/m 2 ; Chivasso TO, Italy) was used for the study. All the materials were stored and handled according to the manufacturer's instructions. The mesh was cut into 1 × 1 cm fragments, which were dried at 37 °C for 24 h. A commercial 0.1 % w/v BAK solution (Montplet, Barcelona, Spain) was diluted with absolute ethanol/distilled water (70/30 v/v ) to 40 %, 30 %, 20 %, 10 %, 7.5 %, 5 %, 2.5 %, 1 %, 0.5 %, 0.1 % and 0.05 % v/v . The mesh fragments were treated with the different dilutions for 30 min at 25 °C by the dip-coating method, a straightforward, inexpensive, dependable and reproducible technique in which a thin BAK layer is physically adsorbed onto the PP surface [ 20 ] (see Fig. 1 ). These samples are hereafter referred to as the 40_BAK, 30_BAK, 20_BAK, 10_BAK, 7.5_BAK, 5_BAK, 2.5_BAK, 1_BAK, 0.5_BAK, 0.1_BAK and 0.05_BAK samples. Six mesh fragments (n = 6) per concentration were subjected to the same dip-coating treatment. Six mesh fragments were treated with the absolute ethanol/distilled water solution (70/30 v/v ) without BAK for 30 min at 25 °C as a solvent-treated control (S mesh). Six untreated fragments served as the untreated control material (U mesh). All the treated and untreated mesh fragments were dried at 37 °C for 24 h and sterilized under UV radiation for 1 h per side. Nuclear magnetic resonance had previously been performed to characterize the BAK used in the study on a Bruker AVIII HD 800 MHz spectrometer (Bruker BioSpin AG, Fällanden, Switzerland) equipped with a 5 mm cryogenic CP-TCI probe [ 18 ]. Toxicological study. The mesh fragments were sterilized for 1 h on each side under an ultraviolet light source. Each biomaterial was evenly distributed in a 6-well plate containing DMEM (Biowest SAS, France) without fetal bovine serum (FBS), following the recommendations of the ISO 10993 standard; the recommended extraction ratio of 0.1 g/mL was chosen for irregular, porous, low-density materials such as textiles. The mesh fragments were incubated in a humidified 5 % CO2/95 % air atmosphere for 72 h at 37 °C, after which the extracts were obtained and used immediately for the toxicological assay. An L929 mouse fibroblast cell line was used in the cytotoxicity tests, in which the cells were incubated at 37 °C and 5 % CO2 in a DMEM solution with 10 % FBS, 100 units/mL of penicillin (Lonza, Belgium), and 100 mg/mL of streptomycin (HyClone, GE Healthcare Life Sciences). The MTT assay was used to determine the impact of the mesh fragment treatment on cell viability. The fibroblast cells were seeded at a density of 10 4 cells per well in a 96-well plate. Following a 24-h incubation period at 37 °C, the media in the wells were replaced with 100 μL of mesh fragment extracts. The medium was also replaced with 100 μL of the same medium used to create the mesh fragment extracts as a positive control, or with 100 μL of a 1000 μM zinc chloride (97.0 %, Sigma-Aldrich) solution as a negative control, as this concentration is highly toxic [ 21 ]. The cells in each well were then incubated with 5 mg/mL MTT for 3 h. The resulting formazan crystals were dissolved in 100 μL of dimethyl sulfoxide (Sigma-Aldrich) at room temperature, after which the absorbance at 550 nm was measured with a microplate reader (Varioskan, Thermo Fisher).
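To make the dilution scheme above explicit, the small helper below computes the effective BAK mass concentration delivered by each dip-coating bath. It assumes ideal volumetric mixing of the 0.1 % w/v stock into the 70/30 ethanol/water diluent and is our illustration rather than a calculation reported in the study.

# Minimal helper (assumes ideal volumetric mixing; not from the paper).
STOCK_W_V_PERCENT = 0.1  # % w/v BAK in the commercial stock solution
DILUTIONS_V_V = [40, 30, 20, 10, 7.5, 5, 2.5, 1, 0.5, 0.1, 0.05]  # % v/v of stock

def bath_bak_percent_w_v(dilution_v_v, stock=STOCK_W_V_PERCENT):
    """Effective % w/v BAK in the bath for a given % v/v of stock solution."""
    return stock * dilution_v_v / 100.0

for d in DILUTIONS_V_V:
    c = bath_bak_percent_w_v(d)
    # 1 % w/v corresponds to 10 mg/mL, so multiply by 10 for mg/mL.
    print(f"{d:>5} % v/v stock -> {c:.5f} % w/v BAK ({c * 10:.3f} mg/mL)")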
Proliferation study The mesh fragments were sterilized for 1 h on each side under an ultraviolet light source. Only the concentrations found to be non-toxic in the toxicological study were used. The different biomaterials were evenly distributed across the surface of a 6-well plate containing DMEM (Biowest SAS, France) without fetal bovine serum (FBS), following the recommendations of the ISO 10993 standard, by which an extraction ratio of 0.1 g/mL was chosen for irregular, porous, low-density materials. The mesh fragments were incubated in a humidified 5 % CO2/95 % air atmosphere for 72 h at 37 °C, after which the extracts were obtained and used immediately for the proliferation assay. An L929 fibroblast cell line was used. The cells were incubated at 37 °C and 5 % CO2 in DMEM containing 0.5 % FBS, 100 units/mL of penicillin (Lonza, Belgium), and 100 mg/mL of streptomycin (HyClone, GE Healthcare Life Sciences). The MTT assay was used to assess the effect of the mesh fragment treatment on cell proliferation. Fibroblast cells were seeded at 5 × 10 3 cells per well in a 96-well plate. After 24 h of incubation at 37 °C, 100 μL of mesh fragment extracts were added to the medium in the wells. As controls, 100 μL of the medium used to create the mesh fragment extracts (positive control), 100 μL of a highly toxic 1000 μM zinc chloride (97.0 %, Sigma-Aldrich) solution (negative control) [ 21 ], and 100 μL of 15 ng/mL epidermal growth factor (EGF) (proliferation control) were added. After the mesh fragment extracts had been incubated with the cells for 72 h at 37 °C in a humidified 5 % CO 2 /95 % air atmosphere, the cells in each well were incubated with 5 mg/mL MTT for 3 h. The resulting formazan crystals were dissolved in 100 μL of dimethyl sulfoxide (Sigma-Aldrich) at room temperature, and a microplate reader (Varioskan, Thermo Fisher) was used to measure the absorbance at 550 nm. Mesh characterization The amount of BAK physically adsorbed onto the PP meshes was determined gravimetrically. Only the meshes that showed biocompatible results at 72 h were characterized. The treated meshes were dried in vacuo to constant weight and weighed after the dip-coating process to determine the amount of BAK adsorbed. Morphology The morphology of the untreated mesh (U Mesh) and the meshes treated with 0.1 and 0.05 % BAK (0.1_BAK and 0.05_BAK, respectively) was examined on a Leica DM750 optical microscope, and Leica ICC50 W images were taken at 4× and 10× magnification. The images were processed on Leica Application Suite X software (Leica, Madrid, Spain). Macroscopic photographs of the mesh fragments were also obtained with a 24 MP Huawei camera at an aperture of f/1.8. Untreated and treated mesh morphology was also examined on a field emission scanning electron microscope (FESEM, Zeiss Ultra 55) with energy-dispersive x-ray spectroscopy (EDS), at an accelerating voltage of 3 kV and a magnification of 27× for imaging and at 10 kV for the elemental analysis. The samples were sputter-coated with gold. Statistical analysis GraphPad Prism 6 software was used for one-way analysis of variance (ANOVA) for multiple comparisons and Tukey's post hoc test (*p < 0.05, ***p < 0.001).
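For clarity on how the MTT read-out feeds the statistics, the sketch below converts 550 nm absorbances to % viability relative to the positive control and runs a one-way ANOVA with Tukey's post hoc test, mirroring the GraphPad analysis in open-source tools. The plate layout and the replicate absorbance values are hypothetical, labelled as such, and not data from the study; the 70 % viability cut-off follows the ISO 10993-5 convention cited in the text.

# Minimal sketch (hypothetical example data; requires SciPy >= 1.8 for tukey_hsd).
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

def percent_viability(abs_sample, abs_positive_ctrl):
    """Absorbance replicates -> % viability vs. the mean of the positive control."""
    return 100.0 * np.asarray(abs_sample) / np.mean(abs_positive_ctrl)

# Hypothetical absorbance replicates per condition (illustration only):
groups = {
    "positive_ctrl": np.array([0.82, 0.85, 0.80, 0.83]),
    "0.05_BAK":      np.array([0.84, 0.88, 0.86, 0.85]),
    "0.1_BAK":       np.array([0.87, 0.90, 0.89, 0.86]),
    "2.5_BAK":       np.array([0.40, 0.45, 0.38, 0.42]),
}
viab = {k: percent_viability(v, groups["positive_ctrl"]) for k, v in groups.items()}

f_stat, p_anova = f_oneway(*viab.values())   # one-way ANOVA across conditions
posthoc = tukey_hsd(*viab.values())          # pairwise Tukey HSD comparisons
cytotoxic = {k: np.mean(v) < 70.0 for k, v in viab.items()}  # ISO 10993-5 cut-off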
Results Toxicological study The results of the cytotoxicity tests performed on the extracts in the presence of fibroblasts are shown in Fig. 2 . The extracts of the 1_BAK, 0.5_BAK, 0.1_BAK and 0.05_BAK samples showed no statistically significant differences in cell viability (%) from the positive control. The 2.5_BAK extract did show a statistically significant difference in cell viability (%), although its cell viability remained above 70 %, indicating non-cytotoxicity. The extracts at high concentrations (40_BAK, 30_BAK, 20_BAK, 10_BAK, 7.5_BAK and 5_BAK) showed statistically significant differences in cell viability (%) (lower than 70 %), indicating that these samples were cytotoxic. Proliferation study The proliferative activity of mesh fragments with BAK in the fibroblast cell line was studied using the non-cytotoxic concentrations (2.5_BAK, 1_BAK, 0.5_BAK, 0.1_BAK and 0.05_BAK), based on the results previously obtained from the cytotoxicity assay ( Fig. 2 ), to avoid toxic effects when increasing the exposure time to 72 h ( Fig. 3 ). The results at 72 h showed a statistically significant increase in cell growth for the 0.1_BAK and 0.05_BAK concentrations, while the 2.5_BAK mesh was non-biocompatible after 72 h. Mesh characterization The biocompatible amounts of BAK adhered to the PP mesh surface, determined gravimetrically, are shown in Table 1 . The physically adsorbed amounts of BAK on the surface of the mesh filaments are so low that the mesh morphology hardly changes and retains its porous structure ( Fig. 4 ). The untreated mesh and the meshes treated with 0.1 and 0.05 % BAK were also analyzed by field emission scanning electron microscopy ( Fig. 5 ). BAK adsorption was shown to have no effect on the original mesh morphology (U Mesh). EDS analysis of the filaments of these three mesh types showed the presence of nitrogen, i.e. the presence of BAK, only on the surface of the 0.1_BAK and 0.05_BAK samples, as expected ( Fig. 5 ).
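The gravimetric loading in Table 1 reduces to a simple mass balance, which the short helper below makes explicit. It is our interpretation rather than code from the study, and whether the % w/w is referred to the coated or the uncoated mesh mass is an assumption, so both variants are returned; the example masses are hypothetical.

# Minimal helper (our interpretation, not from the study): BAK loading as % w/w
# from the dry mass of a mesh fragment before and after dip-coating.
def bak_loading_percent(m_before_g, m_after_g):
    """Return (% w/w vs. coated mass, % w/w vs. uncoated mass)."""
    m_bak = m_after_g - m_before_g
    return 100.0 * m_bak / m_after_g, 100.0 * m_bak / m_before_g

# Example with hypothetical masses for a 1 x 1 cm fragment:
# bak_loading_percent(0.00480, 0.00485) -> (~1.03 %, ~1.04 %)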
Discussion There are potential complications related to the placement of meshes, and patients may, in some instances and in some countries where this is more common, such as the USA or the UK, file claims [ 22 , 23 ]. However, without mesh placement, hernia recurrence rates are extremely high [ 1 ]. The only controversy lies in whether meshes should be placed in contaminated or dirty fields. The latest publications support their placement and also indicate that synthetic (PP) meshes provide the best results in these cases [ 24 ]. The most frequently used mesh worldwide, due to its characteristics, price, infection resistance, tissue integration, etc., is the PP mesh [ 1 ]. The only location where PP meshes should not be placed is intraperitoneal (IPOM). In this study, commercial PP meshes were treated with a 0.1 % w/v BAK solution diluted with absolute ethanol/distilled water (70/30 v/v ) at concentrations ranging from 40 to 0.05 % v/v (the 40_BAK, 30_BAK, 20_BAK, 10_BAK, 7.5_BAK, 5_BAK, 2.5_BAK, 1_BAK, 0.5_BAK, 0.1_BAK and 0.05_BAK samples) by the dip-coating method [ 20 ] ( Fig. 1 ). However, only the extracts of the 2.5_BAK, 1_BAK, 0.5_BAK, 0.1_BAK and 0.05_BAK samples showed no cytotoxic effect on the L929 mouse fibroblast cell line ( Fig. 2 ). BAK is a well-known compound with antimicrobial properties [ 18 , 19 ], although it can be toxic at high concentrations [ 25 , 26 ]. The proliferative activity of the antimicrobial meshes at 72 h was studied in the fibroblast cell line using the non-cytotoxic concentrations ( Fig. 3 ). The results showed that the 0.1_BAK and 0.05_BAK meshes induced fibroblast proliferation. However, the 2.5_BAK mesh proved to be toxic after 72 h. Table 1 shows that the biocompatible amounts of BAK adhered to the PP mesh surface (determined gravimetrically) range from 1.05 ± 0.39 to 5.43 ± 1.91 % w/w . These physically adsorbed amounts of BAK are so low that the macroscopic, optical microscopy and FESEM images of the meshes show that the polymer structures keep their porous morphology ( Fig. 4 , Fig. 5 ). The presence of BAK in these advanced meshes was demonstrated by FESEM-EDS ( Fig. 5 ). Thus, the EDS results showed the presence of nitrogen atoms on the surface of the 0.1_BAK and 0.05_BAK samples (1.13 and 0.29 wt %, respectively). However, the untreated Herniamesh® (U Mesh) showed no nitrogen content, as expected. The expected antibacterial effectiveness of these meshes is high because previous dip-coatings of the BAK compound on similar polymer surfaces showed strong antimicrobial activity against multidrug-resistant bacteria [ 18 , 19 ]. It is well known that increasing the biocompatibility and integration of meshes for hernia repair might be the main avenue to achieving better outcomes [ 27 ]. The typical progression of the healing process necessitates a flawless coordination of each stage, including hemostasis, inflammation, proliferation, and remodeling, along with the participation of every type of cell [ 28 ]. These next-generation meshes (0.1_BAK and 0.05_BAK) showed high biocompatibility in fibroblasts even after 72 h. Furthermore, since they were able to increase fibroblast proliferation, these meshes containing antimicrobial benzalkonium chloride should integrate better into the tissues, which is assumed to improve tolerance to acute infection, thereby avoiding the need to remove infected meshes and reducing the possibility of chronic mesh infection (biofilm).
This would represent a significant scientific advancement, as it would enhance hernia surgery outcomes, decrease morbidity, lower the likelihood of reoperations, and improve the quality of life for these patients. Costs would also be reduced.
Conclusions Polypropylene meshes for hernia repair reduce the risk of recurrence and post-operative pain. However, there are also many risks associated with their use, such as bacterial infection. The present advance provides a new and improved surgical mesh for hernia repair that combines antimicrobial activity with fibroblast-proliferating activity. Meshes treated with various concentrations of antimicrobial BAK were studied. The meshes treated with low concentrations of BAK (1, 0.5, 0.1 and 0.05 % v/v dilutions of a commercial BAK solution) proved to be biocompatible with fibroblasts. Treatment with the 0.1 and 0.05 % BAK dilutions improved the proliferative activity of these cells. These results are promising due to the ability of these antimicrobial meshes to prevent infections while inducing fibroblast proliferation.
Hernia repair is one of the most frequently performed surgical procedures worldwide, and hernia meshes are becoming increasingly used. Polypropylene (PP) mesh implants reduce the risk of recurrence and post-operative pain, although many other risks are associated with them, such as bacterial infection. In this study we developed PP meshes coated with the well-known antimicrobial compound benzalkonium chloride (BAK) by dip-coating. Several dilutions (40, 30, 20, 10, 7.5, 5, 2.5, 1, 0.5, 0.1 and 0.05 % v/v ) of a commercial BAK solution (BAK diluted in 70 % ethyl alcohol at 0.1 % w/v ) were used to produce antimicrobial meshes with different amounts of BAK. The dip-coating treatment with low concentrations of BAK (1, 0.5, 0.1 and 0.05 % v/v dilutions) was found to give biocompatible results in fibroblasts. The 0.1 and 0.05 % v/v dilutions (PP meshes with up to ∼2 % w/w of BAK) showed proliferative activity on fibroblast cells, indicating that these novel antimicrobial meshes show great promise for hernia repair due to their ability to prevent infections while inducing fibroblast proliferation.
Author contributions Á.S-A., A.C.-V. and A.T.-M. performed the experiments; Á.S.-A. obtained funding, led the work and wrote the draft manuscript; Á.S-A., A.C.-V. and S.P-S edited and proof-read the manuscript. All the authors have read and agreed to the published version of the manuscript. Additional information No additional information is available for this paper. Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: the findings of this study contributed to patent application AX220202EP to the OEPM Office, Madrid, Spain, with Á.S-A. as inventor. The remaining authors declare no competing interests.
Acknowledgements The authors would like to express their gratitude to the Fundación Universidad Católica de Valencia San Vicente Mártir and to the Spanish Ministry of Science and Innovation for their financial support through Grant 2020-231-006UCV and PID2020-119333RB-I00/AEI/10.13039/501100011033, respectively. We also thank PANAVALE Suministros Médicos ( https://www.panavale.com ) for providing the Herniamesh® meshes used in this study.
CC BY
no
2024-01-16 23:43:50
Heliyon. 2024 Jan 6; 10(1):e24237
oa_package/a0/4e/PMC10788789.tar.gz
PMC10788794
38199078
Introduction Microbubble generation finds diverse applications across fields such as medicine, pharmacology, material science, food industry, and interdisciplinary applications like sonochemistry, sonoluminescence, and acoustic microstreaming [1] , [2] , [3] , [4] , [5] , [6] , [7] . Various emerging technologies have been developed to generate microbubbles in liquid, employing different physical principles such as outer liquid flow, acoustic field, and electric field [6] , [8] . Among these methods, the acoustic method is one of the most critical methods due to its inherent advantages of simplicity and the ability to generate microbubbles precisely when and where they are required [9] , [10] , [11] , [12] . These properties hold immense value across different domains, ranging from medical applications [13] , [14] , [15] , [16] to the production of specialized chemicals, such as controlling blood clotting and detecting gastrointestinal bleeding [17] , [18] . One common phenomenon in the acoustic field is rectified diffusion of gas bubbles, which refers to the process in which a pulsating gas bubble grows due to a net mass inflow when exposed to sound excitation with an amplitude greater than 0.1 MPa [19] , [20] , [21] , [22] , [23] . This mass transfer process becomes significant when a liquid containing dissolved gas is subjected to a sufficiently intense sound field. Harvey [24] first reported on this mass transfer process, and subsequent researchers have provided theoretical explanations mainly based on changes in the bubble’s surface area. Hsieh [25] extended the understanding by considering the convection term and successfully predicted mass transfer through experimental comparisons. Eller [26] proposed a theory of nonlinear bubble pulsation using a thin-diffusion layer approximation, neglecting gas diffusion and separating the diffusion equation from the bubble motion equation, an approach widely adopted by other researchers [27] , [28] , [29] , [30] , [31] , [32] , [33] , [34] , [35] , [36] . Additionally, Zhang [35] , [36] discussed the rectified mass diffusion of non-Newtonian fluids, such as viscoelastic fluids found in various organisms. Recently, Smith and Wang introduced an exceptional model for bubble growth in liquid through rectified diffusion [37] , [38] . Their model offers a straightforward yet precise solution for gas behavior in liquid, enabling accurate predictions of bubble growth over millions of oscillation cycles [37] , [38] . Notably, their findings highlight the significant influence of shell and area effects on bubble growth in liquid with bulk surfactant concentrations below 2.4 mM, underscoring the importance of surface tension in rectified diffusion for aqueous surfactant solutions [39] . For further theoretical studies on this mass transfer process, readers are recommended to refer to Fyrillas’s work [33] , [34] , [40] . Significant advancements in theoretical research over the past few decades have led to the development of various acoustic devices for applications in medicine and chemistry [4] , [6] , [23] , [41] . However, these devices face a common and challenging limitation: it is challenging to generate controllable-sized microbubbles in fluid, which is crucial for the field of medical imaging, biomedical, environmental, and chemical reactions. For example, in green biorefinery, precise control over bubble size is desired for targeted extraction of bioactive compounds [42] . 
Furthermore, the use of microbubble contrast agents for treating blood–brain barrier disruption is hindered by limitations in controllable bubble size [ 43 ]. These fields all require quantitative control of bubble size in liquids to meet specific application demands. Recent studies have shown promising results by employing dual-frequency external acoustic excitation, effectively dividing the range of bubble growth into two smaller ranges [ 22 ], [ 35 ], [ 21 ], [ 44 ], [ 45 ], [ 46 ]. In addition, dual-frequency sonication can effectively suppress chaotic bubble oscillations [ 47 ] and reduce the threshold for inertial cavitation, thereby enhancing power efficiency [ 48 ]. Considering the vast number of parameter combinations involved [ 49 ], [ 50 ], researchers have developed GPU-based methods to investigate the dynamics of multi-frequency bubbles. These findings collectively underscore the advantages of employing multi-frequency acoustic excitation in diverse applications and offer a pathway for optimizing ultrasound stimulation to induce inertial cavitation. Therefore, increasing the number of acoustic frequencies emerges as a highly promising approach to achieving directed and precise control over bubble size in liquid systems. However, the specific theory and method for achieving rational control of bubble sizes through modulation of parameters in the external acoustic field remain unknown, impeding the development of strategies aiming to control the bubble's mass transfer process. To overcome this limitation, our work contributes to understanding the mass transfer process under multifrequency acoustic excitation, primarily focusing on the theoretical method. Through theoretical and numerical calculations, we investigated how three critical parameters (frequency, pressure amplitude, and amplitude ratio) affect the growth and behavior of microbubbles in liquid under acoustic excitations. By uncovering the underlying mechanisms behind the mass transfer phenomenon, our study provides valuable insights into effectively utilizing multifrequency acoustic stimulation for precise control and enhancement of bubble-related processes. The implications of our findings extend to various fields, including medicine, pharmacology, material science, and the food industry.
Theoretical method To derive the theoretical equations, we began by assuming that the external acoustic excitations, denoted as , could be expressed as a multifrequency acoustic signal consisting of three frequencies with varying amplitudes Here, represents the ambient pressure and ( ) represents the amplitudes of each external acoustic excitation with angular frequencies of , and , respectively. The schematic diagram of the external acoustic excitations is shown in Fig. 1 . Assuming the working fluid is a Newtonian fluid, we adopt Keller’s equation [51] to take into consideration the liquid’s compressibility and viscosity. The bubble motion equation is where Here, the over dot is the time derivative, is the instantaneous bubble radius, is the sound speed in liquid, is the liquid density, is time, is the instantaneous pressure at the gas side, is the equilibrium bubble radius, is the surface tension, is the polytropic exponent, is the liquid viscosity. The acoustic field has three different frequencies, which means the acoustic field with triple frequencies. It should be noted that our model does not consider variations in surface tension, which limits its applicability to non-Newtonian fluids and the dynamics of coated bubbles in liquid [39] . Additionally, our model only incorporates the linear approximation of bubble oscillation and does not account for highly nonlinear phenomena, including sub-harmonics and bifurcation [52] , [53] , [54] . Moreover, we assume that the bubble’s amplitude is small, resulting in oscillations within the spherical regime [55] , [56] , [57] . The diffusion equation follows Fick’s law and considers the gas that is dissolved in the liquid. The gas concentration in liquid can be written as where is the velocity of the liquid at one point; is the diffusion constant. Considering the initial and boundary conditions [58] , Here, is the gas’s initial concentration and is the gas concentration in the liquid. is controlled by Henry’s law, which suggests that it is directly proportional to the partial pressure of the gas. Specifically, and , where is the saturation concentration, is the Henry constant. The first term of Eq. (5), i.e., the convective term, represents the transient change of concentration of the gas. We can neglect it due to the slow movement and the diffusion equation can be simplified as . Hence, the bubble motion and mass transfer of Eq. (5) could be uncoupled. Combining Eq. (1) to Eq. (8), we can obtain the bubble growth rate. It can be expressed as [1] , [3] , [31] Here, is the universal gas constant, is the ambient temperature in the liquid, represents the time average. The solution of Eq. (9) is and where , and , and are Here, we ignore initial conditions and the solution of the homogeneous equation’s effect on gas bubble motion. The time averages of are determined (considering solutions up to the second order) as By combining Eqs. (17)–(19) and Eq. (9), we can obtain the bubble growth rate. To establish a clear relationship between the pressure amplitudes of the acoustic excitation with three different frequencies and the threshold of acoustic pressure amplitude of mass diffusion , we assume that . Therefore, combining with Eqs. (5)–(7) and setting , we can obtain Therefore, as shown in Fig. 2 , if the acoustic pressure amplitude exceeds the threshold value of , the bubble will grow gradually. However, if the acoustic pressure amplitude is lower than , the bubble will shrink and eventually collapse.
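Because the typeset equations in this section did not survive extraction, the sketch below illustrates the numerical set-up only in general terms: it integrates a standard Keller-Miksis-type bubble equation under a three-frequency sinusoidal drive, with the viscous stress left out of the radiated-pressure derivative (a common simplification). It is not the authors' implementation; the physical constants are standard values for an air bubble in water, and the equilibrium radius, drive amplitudes, frequencies, and sign conventions are assumptions.

# Minimal numerical sketch of triple-frequency bubble forcing (assumptions noted above).
import numpy as np
from scipy.integrate import solve_ivp

p0, rho, c = 1.01e5, 998.0, 1500.0          # ambient pressure (Pa), density (kg/m 3), sound speed (m/s)
mu, sigma, kappa = 1.0e-3, 72.8e-3, 1.4     # viscosity (Pa s), surface tension (N/m), polytropic exponent
R0 = 5e-6                                   # equilibrium radius (m), assumed
w = 2 * np.pi * np.array([1e6, 2e6, 3e6])   # angular frequencies (rad/s), assumed 1:2:3 ratio
pA = np.array([50e3, 50e3, 50e3])           # drive amplitudes (Pa), assumed equal

def p_inf(t):
    return p0 + np.sum(pA * np.sin(w * t))

def dp_inf_dt(t):
    return np.sum(pA * w * np.cos(w * t))

def rhs(t, y):
    R, Rdot = y
    pg0 = p0 + 2 * sigma / R0
    pg = pg0 * (R0 / R) ** (3 * kappa)               # polytropic gas pressure
    pL = pg - 2 * sigma / R - 4 * mu * Rdot / R      # liquid-side wall pressure
    dpL = -3 * kappa * pg * Rdot / R + 2 * sigma * Rdot / R**2
    num = ((1 + Rdot / c) * (pL - p_inf(t)) / rho
           + R / (rho * c) * (dpL - dp_inf_dt(t))
           - 1.5 * (1 - Rdot / (3 * c)) * Rdot**2)
    return [Rdot, num / ((1 - Rdot / c) * R)]

sol = solve_ivp(rhs, (0.0, 20e-6), [R0, 0.0], max_step=1e-9, rtol=1e-8)
# sol.y[0] holds R(t); time averages of the oscillation would feed the
# rectified-diffusion growth-rate and threshold estimates discussed in the text.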
Results and discussions The frequency values considered in our discussion are in the megahertz range, which is commonly used in various fields, such as the scattering cross-section of acoustic bubbles and ultrasonic wave propagation [4] , [6] . Specifically, we considered two different ratios: and , with /s. To quantify the pressure amplitudes, we define as the ratio of the pressure amplitudes, as the ratio of the pressure amplitude to , and as the ratio of the pressure amplitude to . Therefore, the pressure amplitudes have a ratio of , and the total input pressure (equivalent pressure) is given by . To maintain a constant value of throughout our discussion, we keep constant for different values of and , which means that the total input power remains constant. We consider the case of air bubbles in water, and the constants used in our numerical calculation are Pa, kg/m³, m/s, Pa·s, mN/m, m²/s, J/mol/K, K, . Bubble growth region To begin, we investigate how the amplitude and frequency of multifrequency acoustic excitation impact the mass transfer process. To simplify our analysis, we assume that the amplitudes of all frequencies in the multifrequency excitation are equal to , and that the total power of the excitation remains constant for different values of and . Accordingly, the pressure amplitudes of the three acoustic excitations are given by: ; ; . The total threshold pressure amplitude and the pressure amplitudes of the three acoustic excitations are denoted as , , , , respectively, and are related by the equation . Therefore, indicates that the bubble will grow until . To verify the accuracy of our theory, we compare the pressure threshold of our model at the first-order (single) frequency (i.e., and equal to zero) with the corresponding threshold at the second-order (dual) frequency (either or equal to zero), as shown in Fig. 3 a. We observed that the threshold pressure values ( ) under single- and corresponding dual-frequency acoustic excitations intersected at one point on the value line, and the maximum pressure threshold value was observed for the dual-frequency acoustic excitations. For example, the value of under single-frequency ( or ) and dual-frequency acoustic excitations ( ) intersected at the common point , which is consistent with previous research [46] . However, unlike that situation, in which the curves share the same intersection points, the value of under multifrequency excitation did not pass through the common intersection points of either the single- or the dual-frequency curves ( Fig. 3 b, red line). Furthermore, its threshold pressure was higher than that of both the single- and dual-frequency excitations. We distinguish the maximum values of the dual-frequency case, i.e., , and in Fig. 3 a, from the maximum values of the triple-frequency case, and in Fig. 3 b. We then compare the pressure thresholds of single and triple frequencies in Fig. 3 c, which shows three regions (A, B, and C) of different based on the values of and , to enable a clear comparison between these lines. A dotted line labeled is defined, which intersects with at the points and , where the subscripts indicate that bubbles under acoustic excitation in the region can grow from to their final size . In region A, where , we observe six intersections between the threshold curve and . We find that, under the same total input power, the bubble growth region under multifrequency acoustic excitation does not significantly increase compared to single-frequency acoustic excitations.
Although increasing the number of frequency components can increase the number of intervals in which the bubble grows, the accompanying narrowing of the bubble growth interval at each frequency means that the growth region under excitation remains nearly unchanged. In region B, where , bubbles with radii in the region can grow under multifrequency acoustic excitation from to the final equilibrium bubble radius . Furthermore, in region C, where , we note that bubbles with radii in the region can both grow to , indicating a significant increase compared to the single frequency or . In other words, adding more low-frequency acoustic excitation is beneficial for increasing the bubble growth region. Next, we compare the pressure thresholds of dual- and multifrequency excitation in Fig. 3 d, where different regions can also be distinguished using and . In region D, where , the bubble growth regions under multifrequency acoustic excitation remain almost the same as those under dual-frequency acoustic excitations, similar to region A. In region E, where , four intersections exist between the threshold curve and . Bubbles with radii in the region can grow to the final equilibrium bubble radius . Moreover, the local maximum threshold values of the dual-frequency excitation, and , will influence the growth region to a limited extent. In region F, where , two intersections exist between the threshold curve and . The situation is similar to that in region C, where the dual frequencies contain the single frequency , and the growth region increases significantly under triple-frequency acoustic excitations. Therefore, similar to dual-frequency acoustic excitation, multifrequency acoustic excitation can also expand the microbubble’s growth region when an additional low-frequency acoustic excitation is added. Influence of and Although the above discussion highlights the complex nature of bubble growth under multifrequency acoustic excitation during mass transfer processes, we can also identify some universal laws that further clarify this problem. Interestingly, as depicted in Fig. 4 , the predicted threshold value of does have common intersection points under single and triple frequencies with different and values. Here, we identify one or two fixed points for all conditions, denoted as and , respectively. By changing the values of and , these points regulate the local value of , thereby influencing the bubble growth region as emphasized above. In Fig. 4 a, when (i.e. ), the lines under single-frequency ( ) and multifrequency ( ) acoustic excitation intersect at the point , regardless of how and change. Similarly, in Fig. 4 b, when changes and remains fixed, the lines under single-frequency ( ) and multifrequency acoustic excitation intersect at the point , regardless of how changes. Likewise, in Fig. 4 c, when changes and remains fixed, the lines under single-frequency ( ) and multifrequency acoustic excitation intersect at the points and , respectively, independent of the value of . Appendix A derives these intersection points ( , where ) under three different conditions from the theoretical model. These intersection points indicate that the mass-diffusion threshold under multifrequency acoustic excitation is independent of and . When the multifrequency ratio ( ) changes from 1:2:3 to 1:3:9, as shown in Fig. 4 d, we find that the lines under single- and multifrequency excitation for different values also intersect at two points, and , respectively.
Despite this complexity, we still demonstrate that there are fixed intersection points between the threshold curves of the multifrequency and single-frequency acoustic excitations. Furthermore, we note in Fig. 4 a that when , increasing and makes the threshold curves near the resonance bubble radius much narrower in the right-hand region of the curve, while those in the left-hand region remain almost the same as the curve under single-frequency excitation with frequency . In Fig. 4 b, when varies and remains the same, decreasing makes the threshold curves near the resonance bubble radius much narrower in the left-hand region of the curve, while those in the right-hand region remain almost the same as the curve under single-frequency excitation. However, when is less than 1 in Fig. 4 c, decreasing makes the threshold curves near the resonance bubble radius much narrower in the middle region of the curve, while the outside of this region remains almost the same as the curve under single-frequency excitation. These results demonstrate that the presence of common intersection points, a fascinating phenomenon similar to the fixed wave nodes observed in mechanical, electromagnetic, or other types of waves [59] , will affect the region where bubble growth occurs. Therefore, we can regulate the bubble growth region by controlling the value of as required. To further clarify how to regulate and the bubble growth region through the values, we investigated the impact of the pressure ratios ( and ) on the local maximum threshold pressures ( , ) of multifrequency excitation. Fig. 5 shows that and have a significant effect on the values of and , and the curves cross at a point where , Pa and Pa, indicating that the best value to enhance bubble growth is . In addition, we summarized the bubble growth regions under different acoustic excitations with different values of in Table 1 . We compared the bubble growth regions for different values in the range of under single and multifrequency excitation. We found that when the value is less than 1 (e.g., ), the width of the bubble growth region (43.85 μm) under multifrequency excitation is slightly larger than that under single-frequency excitation (40.81 μm). By adding two high-frequency acoustic excitations ( and ) to the single-frequency excitation ( ), the bubble growth region can increase from approximately 40 μm to 44 μm, representing a 10 % increase. However, by adding two low-frequency acoustic excitations ( , ) to the single-frequency excitation ( ), the bubble growth region can increase from approximately 8.7 μm to 44 μm, representing a four-fold increase. This, however, raises the minimum bubble radius at the onset of growth from 7.82 μm to approximately 8.48 μm, a change of approximately 8 %. Therefore, it is necessary to appropriately adjust the N value and the magnitude of the different frequency components based on the specific application scenario to control the mass transfer process of bubbles effectively. Influence of initial concentration Fig. 6 illustrates the impact of the initial uniform concentration ( Fig. 6 a) and of the division of pressure amplitude among the frequency components ( Fig. 6 b) on the equilibrium bubble radius. It is observed that the growth rate of bubbles under multifrequency acoustic excitation is significantly higher than that under single- and dual-frequency acoustic excitation. However, the final equilibrium bubble radius under dual and triple frequencies remains unchanged. Therefore, exceeding the saturation conditions (e.g., in Fig.
6 a) accelerates the growth rate of bubbles while maintaining the same final equilibrium bubble size. This suggests that increasing the frequency from dual to triple does not significantly affect the final size of bubbles. In addition, Fig. 6 b presents cases of multifrequency acoustic excitation with unequal amplitudes. It is observed that the bubble growth rate increases as the amplitude of acoustic fields increases, as demonstrated by the difference between the red, green, and black solid lines.
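The amplitude convention described above (equal component amplitudes at fixed total input power) can be made concrete with a short numerical sketch. The snippet below is illustrative only: it assumes that the equivalent pressure is the root-sum-square of the component amplitudes, and the base frequency (1 MHz), equivalent pressure (about 1 atm), and the Minnaert estimate of the linear resonance radius are stand-ins for illustration, not the paper's exact model or parameter values.

import numpy as np

# Illustrative sketch only (not the paper's code): build an N-component driving
# pressure with equal component amplitudes under a fixed total (equivalent)
# pressure, assumed here to be the root-sum-square of the component amplitudes,
# i.e. P_A**2 = sum(p_i**2), so that p_i = P_A / sqrt(N).

RHO = 998.0        # water density, kg/m^3 (typical value, assumed)
P0 = 101325.0      # ambient pressure, Pa (assumed)
KAPPA = 1.4        # polytropic exponent for air, adiabatic limit (assumed)

def component_amplitudes(p_total, n):
    """Equal-amplitude split that keeps the equivalent pressure p_total fixed."""
    return np.full(n, p_total / np.sqrt(n))

def driving_pressure(t, freqs, amps):
    """p(t) = sum_i p_i sin(2*pi*f_i*t); zero initial phases assumed."""
    t = np.atleast_1d(t)
    return np.sum(amps[:, None] * np.sin(2.0 * np.pi * np.outer(freqs, t)), axis=0)

def minnaert_radius(f):
    """Linear (Minnaert) resonance radius of an air bubble in water at frequency f,
    neglecting surface tension and damping; a standard first estimate only."""
    return np.sqrt(3.0 * KAPPA * P0 / RHO) / (2.0 * np.pi * f)

if __name__ == "__main__":
    f1 = 1.0e6                                # hypothetical base frequency, 1 MHz
    freqs = f1 * np.array([1.0, 2.0, 3.0])    # the 1:2:3 ratio discussed above
    p_total = 101325.0                        # hypothetical equivalent pressure, ~1 atm
    amps = component_amplitudes(p_total, len(freqs))
    t = np.linspace(0.0, 5.0 / f1, 2000)
    p = driving_pressure(t, freqs, amps)
    print("component amplitudes [kPa]:", np.round(amps / 1e3, 1))
    print("equal total power check   :", bool(np.isclose(np.sum(amps**2), p_total**2)))
    print("peak |p(t)| over 5 periods [kPa]:", round(float(np.max(np.abs(p))) / 1e3, 1))
    print("Minnaert resonance radii [um]:", np.round(minnaert_radius(freqs) * 1e6, 2))

With these assumed values, each component carries about 0.58 of the equivalent pressure, and the linear resonance radii of the 1:2:3 components fall near 3.3, 1.6, and 1.1 μm, i.e., in the micron range relevant to the growth regions discussed above.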
Conclusions We demonstrate that multifrequency acoustic excitation can enhance the mass transfer of gas bubbles in liquids. We reveal that multifrequency acoustic excitation can significantly accelerate the mass transfer process of air bubbles in liquids when its pressure exceeds a certain threshold, which is lower than that of dual-frequency acoustic excitation. Furthermore, the introduction of additional frequency components complicates the bubble growth process and increases the number of discrete growth intervals. We identified common intersection points between triple-frequency and single-frequency acoustic excitations under equal energy input. This discovery allows for the effective control of bubble growth intervals and size by strategically adjusting the amplitude ratio parameter . By increasing the number of frequencies in the external acoustic field and rationally controlling parameters such as the relative ratios between frequencies and the amplitudes of the acoustic fields, we can generate multiple bubbles of varying sizes. Such control over the size of growing bubbles in liquids by multifrequency acoustic excitation has significant implications for biomedical, environmental, and chemical-reaction applications. In future work, nonlinear oscillations, the shell effect of vapor bubbles, and bubble oscillations in non-Newtonian fluids should be studied to reveal the complex nature of these nonlinear properties.
Graphical abstract The mass transfer of microbubbles under external acoustic excitation holds immense potential across various technological fields. However, the current state of acoustic technology faces limitations due to inadequate control over bubble size in liquids under external excitation. Here, we conducted numerical investigations of the mass transfer behavior of microbubbles in liquids under multifrequency acoustic excitations with different frequencies (in the MHz range), pressure amplitudes (in the range of several atmospheric pressures), and amplitude ratios. We identified various pressure threshold regions for the growth of gas bubbles (radii ranging from a few microns to tens of microns) and observed common intersections between single and multifrequency excitations that enable effective control of the growth intervals and final size of bubbles by adjusting the ratio of pressure amplitudes and the frequency values. Allocating power to the lower-frequency component of multifrequency acoustic excitation is recommended to facilitate mass transfer or diffusion, as low-frequency acoustic excitation has a more significant effect on the growth region than the higher-frequency components. Our study provides a better understanding of the dynamics of bubbles under complex excitations and has practical implications for developing methods to control and promote bubble-related processes.
CRediT authorship contribution statement Xiong Wang: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. Xiao Yan: Investigation, Writing – original draft. Qi Min: Funding acquisition, Supervision, Writing – review & editing. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Appendix A. Intersection points Regarding the mass diffusion excited by the triple-frequency approach, the threshold equation is given by Eq. (A1). Using Eqs. (17)–(19), Eq. (A1) can be expressed as Eq. (A2), where (1) One crossing point on the right We assume that the predicted threshold curve of the single-frequency approach with frequency and one of the triple-frequency curves intersect at the point . According to Eqs. (5), (18), and (19), the point satisfies the following two equations, where and change at the same time. In the threshold curve of the single frequency, In the threshold curve of the triple frequencies, Using Eqs. (A3)–(A4), we get Eq. (A5). Thus, and have no influence on the crossing point . Regardless of how the specific values of and change (as long as ), the predicted threshold curve of the single-frequency approach with frequency and all the predicted threshold curves of the triple frequencies intersect at this point. (2) One crossing point on the left Similarly, the predicted threshold curve of the single-frequency approach with frequency and one of the triple-frequency curves intersect at the point . The point satisfies the following two equations, where changes and remains constant. In the threshold curve of the single frequency, In the threshold curve of the triple frequencies, Using Eqs. (A6)–(A7), we obtain Eq. (A8). Thus, has no influence on the crossing point . Regardless of how the specific value of changes (as long as remains constant), the predicted threshold curve of the single-frequency approach with frequency and all the predicted threshold curves of the triple frequencies intersect at this point . (3) Two crossing points The predicted threshold curve of the single-frequency approach with frequency and one of the triple-frequency curves intersect at the points and . The points satisfy the following two equations, where changes and remains constant. In the threshold curve of the single frequency, In the threshold curve of the triple frequencies, Using Eqs. (A9) and (A10), we obtain Eq. (A11). Thus, has no influence on the two crossing points. Regardless of how the value of changes (as long as remains constant), the predicted threshold curve of the single-frequency approach with frequency and all the predicted threshold curves of the triple frequencies intersect at the two points and . Data availability Data will be made available on request. Acknowledgments The authors acknowledge funding support from the National Natural Science Foundation of China (No. 51976104).
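The amplitude-independence argument of Appendix A can also be checked numerically. Because the paper's threshold expressions (Eqs. (17)–(19) and (A1)–(A10)) are not reproduced here, the sketch below uses simple toy response functions and an assumed additive structure for the squared multifrequency threshold; it demonstrates only the root-finding procedure and the qualitative fixed-crossing behaviour, not the actual model.

import numpy as np
from scipy.optimize import brentq

# Toy numerical cross-check of the fixed-crossing behaviour discussed in
# Appendix A.  Assumed structure (not taken from the paper): the squared
# multifrequency threshold combines single-frequency response functions
# g_i(R0) additively with weights r_i that sum to one,
#     P_multi(R0; r)**2 = r1*g1(R0) + r2*g2(R0) + r3*g3(R0),
#     P_single(R0)**2   = g1(R0).
# Then P_multi = P_single wherever r2*(g2 - g1) + r3*(g3 - g1) = 0, which for
# r2 = r3 is independent of their common value.

def g(r0, f_scale):
    """Toy single-frequency response: a smooth well whose minimum shifts with
    frequency (a stand-in for the real threshold curves, which are not
    reproduced here)."""
    return 1.0 + (r0 - 10.0 / f_scale) ** 2 / 50.0

g1 = lambda r0: g(r0, 1.0)   # base frequency f1
g2 = lambda r0: g(r0, 2.0)   # component at 2*f1
g3 = lambda r0: g(r0, 3.0)   # component at 3*f1

def p_single(r0):
    return np.sqrt(g1(r0))

def p_multi(r0, r2, r3):
    r1 = 1.0 - r2 - r3
    return np.sqrt(r1 * g1(r0) + r2 * g2(r0) + r3 * g3(r0))

# Scan several equal splits r2 = r3 and locate the crossing with the
# single-frequency curve; the crossing sits at the same radius every time.
for r in (0.1, 0.2, 0.3):
    cross = brentq(lambda r0: p_multi(r0, r, r) - p_single(r0), 1.0, 20.0)
    print(f"r2 = r3 = {r:.1f} -> crossing at R0 = {cross:.4f} (arbitrary units)")

With these surrogates, the crossing with the single-frequency curve stays at the same radius for every tested split, mirroring the fixed intersection points reported in Fig. 4.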
CC BY
no
2024-01-16 23:43:50
Ultrason Sonochem. 2024 Jan 6; 102:106760
oa_package/cd/fd/PMC10788794.tar.gz
PMC10788795
38226138
Introduction With an estimated 1.8 million species of organisms, the Amazon is known for its unique and extensive biodiversity, high endemism, and its value as a source of genetic, chemical and ecological data, as well as raw materials for the industry and pharmaceutical laboratories ( Sá et al., 2019 ). The biome covers nine Brazilian states (Acre, Amapá, Amazonas, Mato Grosso, Pará, Rondônia, Roraima, Tocantins, and part of Maranhão) representing 61 percent of the country's land area (approximately 5,217,423 km²). It contains a wide variety of ecosystems, human populations, cultures, and traditional communities ( Martha-Júnior et al., 2011 ). The biome is also home to multiple scorpion species, some of which are medically relevant ( Monteiro et al., 2019 ). Accidental envenomation from their stings makes scorpions a significant public health concern, with reports of hospitalizations and deaths worldwide ( Abroug et al., 2020 ). Approximately 200 scorpion species have been recorded in the Amazon region. Martins et al. (2021) found forty-eight scorpion species in the Brazilian Amazon, six of which are medically important: Tityus apiacas , T. metuendus , T. obscurus , T. raquelae , T. silvestris , and T. strandi ( Borges et al., 2021 ). Scorpion venom is a complex mixture of compounds used for defense and prey capture ( Quintero-Hernández et al., 2013 ). The venom contains a variety of compounds, such as neurotoxins that act on different ion channels through specific receptors. Venoms can be classified based on the three-dimensional structure of the toxins and the type of response elicited. In general, venom compounds are classified according to their nature/structure. Table 1 shows the most common venom compositions. Scorpion toxins are grouped into different families according to their pharmacological targets: sodium, potassium, chloride and calcium channels ( Cid-Uribe et al., 2020 ) and other cell membrane receptors ( Hakim et al., 2015 ). Toxins that act on sodium channels are called NaTx. They are classified as α-NaTx if they bind to receptor site 3, or β-NaTx if they bind to receptor site 4. Toxins that act on potassium channels are called KTx and are grouped into seven families: α-KTx, β-KTx, γ-KTx (Ergtoxins), δ-KTx, ε-KTx, κ-KTx (Hefutoxins), and λ-KTx. Calcines and Liotoxins are calcium-channel binding toxins (CaTx). Romero-Gutierrez and colleagues considered Omegascorpins as a new CaTx subfamily ( Romero-Gutierrez et al., 2017 ). Toxins that act on chloride channels are named ClTx with the single classification α-ClTx. Toxins that act on Transient Receptor Potential (TRP) channels are named TRPTx with the single classification α-TRPTx ( Cid-Uribe et al., 2020 ). The venom of these arachnids is a mixture of proteins, peptides, nucleotides, and amines. It targets excitable and immunological cells, especially potassium, calcium, chloride and sodium channels. An increasing number of studies have focused on their composition and bioactivity, suggesting their potential use in medical treatments and drug development ( Ghosh et al., 2019 ). Scorpion venom toxins have been found to have several important pharmacological and insecticidal properties. These include analgesic, immunostimulatory, anticoagulant, antithrombotic, antimalarial, antiproliferative, anti-inflammatory, antiviral, anti-infectious, antiepileptic, antihypertensive, anti-osteoporotic, and antitumor effects ( Ahmadi et al., 2020 ; Ghosh et al., 2019 ).
An overview of scorpion venom studies is helpful to evaluate what we know, identify research gaps, and guide future investigations ( Paul and Criado, 2020 ). The purpose of this literature review is to present and analyze studies on Amazonian scorpion venoms published between 2001 and 2021 in scientific articles or theses and dissertations available online, in order to understand the scientific advancements, challenges, and trends within this domain over the past two decades.
Discussion Martins et al. (2021) highlight the scarcity of research on scorpion venom in the Brazilian Amazon before 2001, with only one study in 2000. This suggests that interest in Amazonian scorpion venom is a recent phenomenon. Although extensive biochemical studies have been conducted since the early 21st century, it is still remarkable that only a limited number of species have been studied in the vast Amazonian biome. The research on scorpion venom in the Brazilian Amazon is still in its early stages, thus providing ample opportunities for further exploration. Despite the wealth of known pharmacological properties associated with scorpion venom and its toxins, these properties are largely unexplored in the specific context of the Brazilian Amazon. Promising avenues for investigation include untapped areas such as antiosteoporotic, antimalarial, anti-inflammatory, antiepileptic, analgesic, antineoplastic, and other potential therapeutic properties ( Ahmadi et al., 2020 ; Ghosh et al., 2019 ). Based on the comprehensive description and discussion provided thus far, along with the data presented in Table 6 , Table 7 , it is evident that scientists have achieved remarkable advancements. Though progress has been made in understanding scorpion venom biochemistry in Brazil and the Amazon, much remains unexplored due to the region's vast biodiversity. The use of scorpion venom in biotechnology offers the potential for medical and environmental breakthroughs. Studying a wider range of Amazonian scorpion species could lead to groundbreaking discoveries. Biotechnological exploration of scorpion venom has uncovered treatments for various conditions. Specifically, scorpion venom shows promise in the treatment of muscular and neurological disorders, thrombosis, and the development of region-specific anti-scorpion serums tailored to Amazonian venom profiles. Scorpion venom has great potential as a valuable resource to address medical challenges specific to the Amazon region. While highly venomous scorpions often receive considerable attention for their potential harm to humans, harmless scorpion species are also highly valuable for scientific research. One such example is Brotheas amazonicus . This species has been studied even though it poses no threat to humans. In their work, Ward et al. (2018) emphasize the continued importance of studying harmless scorpions for drug development and medicinal purposes. These harmless scorpion species may represent an untapped source of valuable compounds and medicines. The distribution of funding and collaborations highlights the predominantly national focus of Brazilian research institutions in the study of Amazonian scorpion venom. Nevertheless, the involvement of foreign institutions from seven different countries demonstrates the international interest and collaboration in this field. Particularly noteworthy is the extensive study of Tityus obscurus , which has garnered considerable attention from the global scientific community. The recognition of T. obscurus venom as a valuable resource for scientific exploration and potential biotechnological applications is highlighted by the interest of researchers globally. The specimens T. silvestris and T. apiacas are of medical interest, but there is a lack of studies on the epidemiological and clinical manifestations of the accidents ( Gomes et al., 2020 ; Coelho et al., 2016 ; Monteiro et al., 2019 ). 
Furthermore, no information on the chemical characterization of their venoms or on scorpion-venom-based therapeutic applications was published during the investigation period. Martins and collaborators (2021) argue that more studies are needed to fill the gaps in the knowledge of Amazon scorpions, with a focus on the chemical composition, biological aspects, and epidemiological and clinical characterization of scorpion accidents in Brazil. Documenting the collection sites of scorpions is crucial for venom studies. This data helps researchers understand the distribution of scorpions and determine future research areas. By identifying the exact regions where scorpions are located, researchers can gain valuable insights into their habitats, ecological preferences, and potential variations in venom composition among different populations. Understanding the geographical origins of the scorpions studied allows researchers to consider regional differences in venom composition and potentially correlate these variations with ecological factors, including habitat, altitude, or climatic conditions. This data helps develop targeted studies and improves our understanding of the diverse venom profiles of scorpions in different regions. Some studies have focused exclusively on synthetic toxins, indicating a distinct area of research where synthetic compounds are designed and investigated for their venom-like properties. However, there is a worrying lack of information, as twelve studies (32% of the total) did not specify the origins of the scorpions or the toxins examined. This lack of information may hinder the ability to establish precise links between venom characteristics and their geographical context. Providing complete and transparent information regarding specimen origin is crucial for reproducibility, comparability, and advancing our understanding of scorpion venoms. Two studies (5.4%) utilized scorpions kept at Instituto Butantan in the state of São Paulo. This suggests the use of captive scorpions, which allows for controlled laboratory studies. Although captive scorpions may not represent the full range of venom profiles found in the wild, they allow for controlled experiments, comparative analysis, and targeted studies on captive-bred individuals. Documentation of the collection sites is therefore critical in scorpion venom studies, as it allows researchers to gain valuable insights, for instance into regional variations, and to guide future research. However, there is a need for improved reporting standards, as a significant portion of the studies did not provide information on the origin of the scorpions and toxins studied. By addressing these issues and ensuring transparent reporting, researchers can improve the reproducibility and comparability of studies, ultimately advancing our understanding of the diverse and complex world of scorpion venoms. Advancing research on scorpion venom in the Brazilian Amazon requires overcoming funding and collaboration challenges ( Ubfal and Maffioli, 2011 ). Securing enough funds is key to conducting comprehensive studies, purchasing equipment, building facilities, and obtaining research materials. Additionally, fostering collaboration among researchers and institutions is essential for pooling expertise, sharing resources, and addressing the complexities of scorpion venom research. These collaborative efforts facilitate the exchange of knowledge and provide access to a variety of scorpion species, resulting in synergistic research initiatives.
Combining resources and expertise helps researchers overcome the challenges of the diverse Amazon biome, leading to impactful discoveries in scorpion venom research. Therefore, securing funding and fostering collaborations are essential for successful research in this critical area. Addressing the scarcity of professionals in scorpion venom research requires a focus on education. The shortage of experts in this field can be partially attributed to the absence of specialized training programs specifically designed for scorpion venom studies. To improve our understanding of the intricate nature of scorpion venom and harness its potential for various applications, it is imperative to prioritize the development and expansion of educational initiatives. Establishing dedicated academic programs and courses is crucial for providing aspiring researchers with the necessary knowledge and skills to navigate the complexities of scorpion venom research. Collaboration with academic institutions, research centers, and industry partners is needed to design curriculum modules that cover a wide range of disciplines, including biochemistry, pharmacology, and toxinology. By promoting a multidisciplinary approach through educational frameworks, future professionals can acquire the essential interdisciplinary expertise needed to address the multifaceted nature of scorpion venom research. These efforts must be accompanied by public awareness of the importance of scorpion venom research. Educating the public on the potential medical, pharmaceutical, and biotechnological applications of scorpion venom can generate support for educational programs and research endeavors. Public engagement can also help to dispel misconceptions about the field and highlight its relevance to scientific and societal progress.
Conclusion During the period considered in this review, only a few Amazonian scorpions from Brazil were studied for their venom. There is still a lot of research to be done in this area. While the venom of these scorpions holds significant potential for pharmaceutical and clinical applications, it remains mostly unexplored and poorly understood. The Amazon's rich biodiversity provides an opportunity to uncover novel bioactive compounds that can shed light on envenomation processes unique to the region. These compounds could not only improve the effectiveness of antivenoms but also contribute to the development of new technologies and therapeutics, in line with the goals of the Global One Health initiative. Securing adequate funding and fostering collaboration between institutions are crucial to the success and continuity of research. The Amazon rainforest is home to a vast array of known and unknown species, many of which possess unique compounds that could be used for therapeutic purposes. Collaborating with other institutions and providing funding will help us identify and utilize these compounds in the treatment of venomous stings, while also developing novel technologies and therapies that can benefit global health.
The Amazon biome is home to many scorpion species, with around two hundred identified in the region. Of these, forty-eight species have been reported in Brazil so far, and six of them are of medical importance: Tityus apiacas, T. metuendus, T. obscurus, T. raquelae, T. silvestris , and T. strandi . Three non-medically important species have also been studied: Opisthacanthus cayaporum , Brotheas amazonicus and Rhopalurus laticauda . The venom of the scorpion T. obscurus is the most studied, followed by O. cayaporum . We aim to update the study of these Amazonian scorpion species. We explore the harmful and beneficial properties of scorpion venom toxins and how they could be applied in drug development. This systematic review focuses on studies that collected and analyzed venoms from scorpions in Brazil. Only papers on Amazonian scorpion venom studies published between 2001 and 2021 (scientific articles, theses, and dissertations) were selected, based on the lists of scorpions available in the literature. Species found in the Amazon but not confirmed to be Brazilian were omitted from the review. Theses and dissertations were chosen over their derived articles. We found 42 eligible studies (13 theses, 27 articles and 2 patents) out of 17,950 studies, and a basic statistical analysis was performed. The literature showed that T. obscurus was the most studied venom with 28 publications, followed by O. cayaporum with seven articles, B. amazonicus with four articles, T. metuendus with two articles and R. laticauda with one article. No publications on the characterization of T. silvestris and T. apiacas venoms were found during the reviewed period; only the clinical aspects were covered. There is still much to be explored despite the increasing number of studies conducted in recent years. Amazonian scorpions have promising potential for pharmaceutical and clinical applications. Graphical abstract Highlights • Only 5 of the 48 scorpion species reported in the Brazilian Amazon have been studied for their venom. • Tityus obscurus and T. metuendus are of medical importance. • Opisthacanthus cayaporum , Brotheas amazonicus , Rhopalurus laticauda remain poorly studied. • T. obscurus venom is the most researched, presumably because of its medical importance. Handling Editor: Ray Norton
Overview of scorpion knowledge in Amazonia We conducted a systematic review of publications from 2001 to 2021 that described studies on Amazonian scorpion venom. This was based on scorpion lists provided by Brazil and Porto (2010) and Borges et al. (2021) . We excluded studies on species found in the Amazon, but not listed as Brazilian. Due to the greater amount of information, theses and dissertations were chosen over their derived articles. We included theses and dissertations in the literature review to provide an in-depth analysis of the results and to help contextualize the research. Future perspectives of this work are also detailed. To retrieve scientific papers published or available online, we consulted Google Scholar, the Brazilian Digital Library of Theses and Dissertations (BDTD, http://bdtd.ibict.br/vufind/ ), the Catálogo de Teses e Dissertações da CAPES , PubMed, the Virtual Health Library, the Rede Iberoamericana de Innovación y Conocimiento Científico - REDIB, the Networked Digital Library of Theses and Dissertations – NDLTD, the EBSCO Open Dissertations, Cochrane Library, the National Institute of the Industrial Property (INPI) and Espacenet using the following keywords in English and Portuguese: “peçonha”, “venom”, “scorpion venom”, “peçonha de escorpião”; “scorpion venom + amazon”; “Amazon scorpion”, “peçonha de escorpião + amazônia”; “scorpion venom characterization”, names of scorpion species and others related to the research. Literature selection Google Scholar found 2,650 results; the BDTD 151 results; the Catálogo de Teses e Dissertações da CAPES, 396 results; PubMed, 24; the Virtual Health Library, 16; the Cochrane Library, 68; the NDLTD, 9,375; EBSCO, 62; and the REDIB, 138. Among these results, 40 different papers met the selection criteria for this review: 13 theses and dissertations, and 27 journal articles. INPI ( https://busca.inpi.gov.br/pePI/jsp/patentes/PatenteSearchBasico.jsp ) showed 31 results but only two of them were related to Amazon scorpions. An Espacenet search ( https://worldwide.espacenet.com ) showed that only 3 out of 38 results were indirectly related to Amazon scorpion venom. All papers retrieved from BDTD, CAPES, PubMed, the Virtual Health Library, and the Cochrane Library were found using Google. This proved to be the most effective search engine, covering more online scientific papers than any other search engine currently available ( Martín-Martín et al., 2021 ). Fig. 1 shows a flow diagram detailing the selection process in the databases. General information Scorpion species and publications The reviewed papers encompassed five different species, as presented in Table 2 and Fig. 2 . The species Tityus obscurus was referred to by its synonym T. cambridgei in 11 studies. The species Rhopalurus laticauda was referred to by its synonym R. crassicauda . Over the last two decades, about 10% of scorpion species from the Brazilian Amazon region had their venoms studied, with two species being medically important. The most studied species is T. obscurus, which has also been observed by Martins and collaborators (2021). The studies were published homogeneously throughout the period considered in this review, as shown in Fig. 3 . Study objectives The objectives of the studies could be divided into eight categories. Fig. 4 , Fig. 5 show that most of the studies attempted to describe the molecular diversity of venoms from T. obscurus, O. cayaporum , B. amazonicus , T. metuendus and R. laticauda using biological assays. 
The only comprehensive study appears to be on T. obscurus , as it is one of the most medically important species in the region ( Amado et al., 2021 ). All five species have been subjected to chemical characterization of their venom (e.g. Abreu et al., 2020 ; Batista et al., 2018 ; Camargos, 2009 ; Dias et al., 2018 ; Higa, 2008 ). Material origin Information regarding the collection sites of the scorpions could help to draw conclusions based on their distribution and guide future research. Table 1 shows that the toxin from the studied Amazonian scorpions came from Pará, collected in Santarém, Benevides, Belterra, Marajó Island, and Floresta Nacional do Tapajós. Some were also collected in Manaus (state of Amazonas), Palmas (Tocantins) and Boa Vista (Roraima). Five studies (12%) focused exclusively on synthetic toxins. Unfortunately, fourteen studies did not provide information on the origin of the scorpions and the toxins studied, representing 35% of the studies. Two studies (5%) used scorpions kept at Instituto Butantan in the state of São Paulo. Scorpions from Amazonas were collected in Manaus and other unknown locations. Fig. 6 shows the origin of the scorpions and toxins cited for each state. Table 3 provides details on the origin of the specimens and their associated toxins for each species. Study location, funding and partnerships Brazilian research institutions were in charge of 70% of the studies on Amazonian scorpion venom. These institutions received funding from various sources, including from other institutions in Brazil (28 studies), Mexico (2 studies), and Belgium (2 studies). Partnerships were also formed with research institutions from several countries, including 11 collaborations in Brazil, 3 in Mexico, and 1 each in Belgium, Italy, Germany, Colombia, the United States, and the United Kingdom. Three studies did not report funding sources, and 10 studies did not report any partnerships. While Brazilian research institutions dominated, foreign institutions accounted for 27% of the studies on Amazonian scorpion venom. These studies were primarily conducted by institutions from Mexico (4 studies) and Taiwan (4 studies), with contributions from institutions in Italy (1 study) and the United Kingdom (1 study). Foreign funding for these studies came from different countries, including Mexico (3 studies), the United States (2 studies), Taiwan (1 study), and the United Kingdom (1 study). However, five studies did not report funding sources, and four studies did not report any partnerships. Knowledge of scorpion species In this section, we present the most extensively Amazonian scorpion species described in the literature during the study period. In the following sections, we will describe and highlight the main characteristics observed in each Amazonian species studied. Tityus obscurus Gervais, 1843 T. obscurus (family Buthidae), also known as T. cambridgei , T. paraensis and T. amazonicus , is a species of significant medical importance ( Pardal, 2014 ). Adults are black, while juveniles have light spots. Its venom has been extensively studied, including toxin characterization, electrophysiological characterization, phylogenetic and structural analysis of the toxins, lethal activity analysis, antimicrobial, cytotoxicity and retroviral evaluation, molecular cloning and sequencing. According to the reviewed literature, the venom and toxins of T. obscurus are complex and can help improve and understand scorpionism treatment, antivenoms and epidemiology for the Amazonian population. 
T. obscurus venom triggers a complex mechanism of envenoming pathogenesis. Moreover, studies on T. obscurus venom help to improve the treatment of diseases affecting the nervous and muscular systems, as well as infections caused by retroviruses, fungi and mycobacteria, and other diseases caused by enzymes, idiopathic pulmonary fibrosis, and enhance immunity. Due to its toxin activity, the venom has potential for treatments targeting neurotransmitter release, hormone secretion, regulation of fluid secretion and lymphocyte activation. The promising research on T. obscurus ’ venom can potentially aid in the development of treatments for a wide range of diseases, from neurological and muscular system disorders to infections. Batista and colleagues were the first to study and chemically characterize its venom. They discovered the following toxins: Tc48a, Tc49a, Tc49b, and Tc54 which recognize sodium channels ( Batista et al., 2002a ). They also isolated and described the toxins Tc30 and Tc32 as potent suppressors of potassium currents in human T lymphocytes ( Batista et al., 2002b ). In 2004, this group identified and described 26 sodium channel toxins (Tc1, Tc27, Tc29-33, Tc35, Tc37, Tc39-41, Tc43, Tc46, Tc48a, Tc48b, Tc49a, Tc49b, Tc50, Tc54, Tc56, Tc58, Tc61, Tc64, Tc66, and Tc83). Murgia et al. (2004) reported that Tc48b affects sodium permeability in pituitary GH3 cells. The toxin Tc54 was renamed To4 by Duque et al. (2017) after being electrophysiologically characterized as exhibiting a beta-type effect on different human sodium channel isoforms, exhibiting a beta-type effect on these channels. Liu & Lin ( Liu and Lin, 2003 ) observed that Tc1 prefers the Kv1.1 potassium channel due to stronger electrostatic and hydrophobic interactions. Wang et al. (2009) demonstrated that a synthetic version of Tc1 has the same functional properties as the natural toxin, being a stable potassium channel blocker. Grottesi et al. (2003) studied the flexibility of Tc1 and found that it shares a common fold with agitoxin-2 and charybdotoxin from the scorpion Leiurus quinquestriatus var. hebraeus . Guerrero-Vargas et al. (2012) isolated 15 sodium channel toxins (To1–To15), noting multiple names for the same toxins in the literature. They hypothesized a substantial cladistic difference between toxins produced by congeneric scorpions in south-eastern South America and those produced by northern Amazon basin scorpions. The authors also demonstrated that the alpha-class NaScTxs are closely related to Tpa4, Tpa5, Tpa6, To6, To7, To9, To10, and To14, whereas the beta-class NaScTxs are more closely related to Tpa7, Tpa8, To4, To8, To12, and To15 sequences. To5 may be an arthropod-specific toxin. In T. obscurus venom, Dias (2016) detected 517 peptides (27 sequenced) and 46 other non-peptidic compounds. Four peptides exhibited hemolytic activity. Tibery et al. (2019) isolated the beta-toxins Tc48b (or Tc49a) and Tc49b. Tc49b could inhibit most sodium channel isoforms, thereby altering the open probability during activation and steady-state inactivation in human cells. Dias et al. (2018) detected 27 peptides ranging from 400 to 4,000 Da in T. obscurus . Thirteen were biologically tested and caused hemolysis, as well as lactate dehydrogenase release from the mast cell cytoplasm into the surrounding environment. This potentiated significant inflammatory processes and changes in locomotion and lifting capacity. Huang (2004) and Chang (2016) studied the alpha-toxin Tc32 and demonstrated its inhibition of Shaker B and Kv1. 
x channels and described how the interaction occurs. Stehling et al. (2012) compared Tc32 with the TdK2 and TdK3 toxins. He hypothesized that the affinity and selectivity of the toxins are determined by differences in their electrostatic properties, contact surfaces and total dipole moment orientations. Fig. 7 shows the protein and peptide components observed by De Oliveira and collaborators (2018). While the toxins shared similarities with some putative toxins found in other Tityus species, such as P84688, P84685, H1ZZH7, P60213, H1ZZIO, P60214, PO1496, H1ZZI3, H1ZZ12, and P60212, the authors emphasize that the T. obscurus venom components are not recognized by anti- T. serrulatus venom serum. Interestingly, Pardal (2014) compared the venom of two T. obscurus populations from two regions of Pará and observed differences. The venom of western scorpions had more components than that of eastern scorpions, with more potassium and sodium modulators. This is why incidents in the western region of Pará are more severe than those in the eastern region. Additionally, the author recommends a taxonomic revision. This highlights that venom studies can significantly contribute to both species classification and phylogenetics. Borja-Oliveira et al. (2009) observed in rat skeletal muscle that a 10 g/mL venom solution generated a gradual and sustained increase in contractile force for 120 min. Only higher concentrations promoted transient potentiation. They hypothesized that venom could be used clinically. Additionally, de Paula Santos-da-Silva et al. (2017) noted that rats showed signs of envenomation approximately 30 min after injection, with a peak of systemic effects 60 min after. This injection was performed using 0.1 mL/100 g of sodium chloride solution. Some of the effects observed were: hemorrhagic patches in the lung parenchyma and pleural regions at 10 mg/kg; also, extravasation of red blood cells in the parenchyma; decrease in general and locomotor activity at 60 min; breathing difficulty; piloerection; palpebral ptosis; excessive oral and nasal secretions; somnolence; photophobia; priapism; “wet dog shakes”, and immediate diuresis. In biological tests from a biotechnological perspective, Marques-Neto et al. (2018) demonstrated that ToAP2 suppresses the growth of four Mycobacterium massiliense strains at 200 μM. It reduced the bacterial load in the liver, lung and spleen of mice, and recruits monocytes, neutrophils and eosinophils. Guilhelmelli et al. (2016) isolated and recorded that ToAP3, ToAP2-ToAP4, ToAcP, and NDBP-4.23 have antifungal properties against filamentous fungi and yeast such as Candida . Later, Ferreira & Carvalho (2017) isolated P42 (probably To4 or another) and demonstrated its antifungal activity against yeast strains: C. albicans , C. tropicalis and C. parapsilosis , and its antibacterial activity against Escherichia coli and Staphylococcus aureus . Da Mata et al. (2020) tested eight synthetic peptides and demonstrated that the P6 peptide has low cytotoxic activity against primary human leukocytes. It also has high antiretroviral activity against simian immunodeficiency virus replication in the HUT-78 cell line. De Holanda and Júnior (2019) observed that ToAP3 and ToAP4 can suppress inflammatory responses and modulate the activation and maturation of dendritic cells in mice, making them suitable candidates for anti-inflammatory therapies. Simon et al. 
(2018) also tested these toxins against early-stage idiopathic pulmonary fibrosis in rats and found that the toxins stabilized lung damage and slowed disease progression. Mourão (2016) isolated ToPI1 and synthesized it (ToPI1s). Its activity against trypsin in chromogenic assays and lack of adverse effects in mice make it a good candidate for therapeutic purposes. Opisthacanthus cayaporum Vellard, 1932 O. cayaporum (family Hormuridae) is a black scorpion from the south of Pará to the central region of Tocantins, reaching between 7 and 9 cm in length. It has no medical importance ( Schwartz et al., 2008 ). Its venom underwent purification and characterization of its peptides, functional characterization and evaluation of its antifungal activities, and transcriptomic studies. Specimens were predominantly collected in Tocantins. Schwartz et al. (2008) detected 250 different components in the venom, including a peptide with 65% similarity to the α-KTx 6.10 toxin (OcKTx5). They suggested that the venom was insect-specific, harmless to mammals, and had phospholipase and antibacterial activity. Later, Schwartz et al. (2013) studied the OcyKTx2 peptide and described it as having 34 amino acids, four disulfide bridges, and a molecular weight of 3,807 Da. They compared it to other toxins and demonstrated that it acts on Shaker B and Kv1.3 channels at nanomolar concentrations. Silva (2008) characterized scorpion venom gland transcripts by building a cDNA library with 67 distinct sequences. This library included toxin-like sequences and others involved in gene and protein expression. The peptide Cayaporina (NDBP 3.7) exhibited antimicrobial activity against E. coli and S. aureus , with no hemolytic activity in human erythrocytes. Camargos (2009) identified the potassium channel blocker κ-KTx 2.5 (3 kDa), and partially sequenced a Scorpine-like and a non-disulfide bridged peptide (NDBP) OcCT2f, which showed antimicrobial activity and warrants further investigation. The peptide κ-KTx 2.5 was later investigated by Camargos et al. (2011) and had no effect on E. coli and S. aureus at 128 mM. Guilhelmelli et al. (2016) studied the effects of three peptides from O. cayaporum as antifungals: Con10 (27 amino acids long), NDBP-5.7 (13 amino acids), and NDBP-5.8 (14 amino acids). Con10 showed antifungal activity, particularly against Candida albicans . NDBP-5.7 and NDBP5.8 displayed activity against C. albicans and C. tropicalis . Brotheas amazonicus Lourenço, 1988 B. amazonicus (family Chactidae) is a black scorpion with reddish tips and telson ( Martins et al., 2021 ), found in Amazonas, Roraima, and Rondônia, and known for its low lethality venom. Its venom was subjected to molecular characterization, biological activity analysis and evaluation for potential biotechnological uses. Higa (2008) demonstrated that its venom does not induce bleeding or blood coagulation in mice. This confirmed its low toxicity and that its toxins have potent analgesic activity against inflammatory pain, suggesting potential value for analgesic drug development. The author also found that its venom exhibits phospholipase A 2 activity. He suggests that its 7080 Da serine proteases are responsible for the proteolytic activity. Higa et al. (2014) also demonstrated that the venom can degrade bovine fibrinogen without fibrin clot formation. This makes it a potential candidate for antithrombotic drugs and vaccines against scorpion envenomation. 
Ireno (2009) identified 201 molecular species, including peptides ranging from 0.8 to 17 kDa, and sequenced eight peptides. Tityus metuendus Pocock, 1897 T. metuendus (family Buthidae) is a medically significant species from the Amazon. It has a reddish-black coloration. Batista et al. (2018) demonstrated that the venom collected in Manaus (Amazonas) is highly toxic to mammals and lethal to mice even at low concentrations. The venom contains alpha and beta-toxins closely resembling those found in T. obscurus . Among the various proteins and peptides, the authors aim to identify sodium and potassium channel toxins, hyaluronidases, metalloproteinases, endothelin, and angiotensin-converting enzymes, allergens, and bradykinin-potentiating peptides in the venom. This study highlights the need for further research. Rhopalurus laticauda Thorell 1876 R. laticauda (family Buthidae) is found in Roraima, south of Guyana and Venezuela, in deciduous forests and semi-arid regions. This species ranges in size from 45 to 70 mm, has a yellowish-brown coloration with a dark tail, and can be found under rocks, tree barks and fallen logs ( Martins et al., 2021 ). Abreu et al. (2020) conducted a comprehensive study of its venom, using samples from Boa Vista, the capital of Roraima. They isolated the major toxin Rc1 , weighing about 6.5 kDa. This toxin represented 24 percent of the total protein of the soluble crude venom and was classified as a beta-neurotoxin. The crude venom could not be recognized by Brazilian antivenoms. However, a fraction of the venom containing hyaluronidase was recognized by the general arachnid antivenom. It was found to be specific to mammalian and insect voltage-gated sodium channels and exhibited cytotoxic effects and strong pro-inflammatory activities. Patent application Despite the vast biodiversity of scorpions in the Amazon, research has primarily focused on the venom of the T. obscurus species in the context of patent applications ( Table 4 ). These applications pertain to the antimicrobial peptide and trypsin inhibitor activities of the species and are typically owned by universities and research institutes ( De Marco Almeida et al., 2015 ). Other inventors have looked into the possibility of knotting peptides derived ( Table 5 ) from known entities, including toxins or proteins associated with venom. Among these, the Amazonian scorpion T. obscurus has been identified as a source, with applications ranging from therapeutic agents against cartilage disorders. The ToAP2 is a non-disulfide-bridged antimicrobial peptide (NDBP) derived through bioinformatics analysis of a cDNA library sourced from the venom gland of the scorpion T. obscurus . It has exhibited potent antimicrobial activity against Mycobacterium massiliense strains (GO01, GO06, GO08, and CRM0020) as reported by Trentini et al. (2017) and Marques-Neto et al. (2018) . Moreover, this peptide has demonstrated antifungal properties against Cryptococcus spp. and Candida albicans , with Freitas et al. (2020) highlighting its efficacy at low concentrations and minimal toxicity to mammalian cells. Another noteworthy patent invention stems from toxins found in T. obscurus , resulting in four peptides: ToPI1s, ToPI1-K21A, cToPI1s, and cToPI1-K21A. These peptides exhibit potent trypsin-inhibiting activity, originating from a modification of a scorpion venom peptide ( Schwartz et al., 2020 ). ToPI1s and ToPI1-K21A consist of 33 amino acid residues, three e bonds, and C-terminal amidation. 
On the other hand, cToPI1s and cToPI1-K21A, through interaction with trypsin, adopt a cyclic structure with 32 residues and a Cys-stabilized alpha/beta configuration, as shown by Mourão (2016) and Schwartz et al. (2020). These peptides offer several advantages, including high chemical and thermal stability, lack of cytotoxicity in fibroblasts, low activity on potassium channels, and an absence of behavioral effects. These attributes render them attractive for diverse therapeutic applications, such as antiretrovirals, antitumor agents, or probes. According to Schwartz and collaborators (2020), the peptide ToPI1-K21A has been noted for its lower incidence of side effects when administered to mammals. Olso et al. (2017; 2018) and Hopping (2019), along with their collaborators, proposed a pharmaceutical composition and method to target drug delivery to a specific region through a knotted peptide (Table 5), which may be a variant peptide belonging to a family derived from different organisms, including T. obscurus. Funding This work was supported by the Pro-Rectory of Research and Post-Graduation (PRPPG) of the Federal University of Roraima (UFRR), Edital 14/2022, and by the Association Plateforme BioPark d’Archamps (France), which supported part of this work through its research and development program. Ethics in publishing We, Joel Ramanan da Cruz, Philippe Bulet and Cléria Mendonça de Moraes, are the authors of the manuscript entitled “Exploring the potential of Brazilian Amazonian scorpion venoms: a comprehensive review of research from 2001 to 2021”. By common consensus, we have agreed on authorship and have read and approved the submitted manuscript. Our agreement also includes the subsequent publication following approval by the Editors. We confirm that there is no conflict of interest. This work does not involve the use of human or animal subjects, because it is based on secondary studies available in the literature. It is important to mention that we have cited the original sources properly. We have adhered to Good Publication Practices (GPP) and Good Science. CRediT authorship contribution statement Joel Ramanan da Cruz: Writing - review & editing, Writing - original draft, Validation, Resources, Methodology, Formal analysis, Data curation, Conceptualization. Philippe Bulet: Writing - review & editing, Validation, Resources, Funding acquisition, Formal analysis. Cléria Mendonça de Moraes: Writing - review & editing, Validation, Supervision, Project administration, Methodology, Funding acquisition, Formal analysis, Data curation, Conceptualization. Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Cleria Mendonca de Moraes reports financial support was provided by the Federal University of Roraima. Philippe Bulet reports financial support was provided by the National Centre for Scientific Research. If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability No data was used for the research described in the article. Acknowledgements We would like to thank Nora Touqui for improving the English version of the manuscript.
CC BY
no
2024-01-16 23:43:50
Toxicon X. 2023 Dec 29; 21:100182
oa_package/12/95/PMC10788795.tar.gz
PMC10788796
38226227
Introduction The water temperature in a fish tank is important for fish health [ 1 ]. Intense water temperature variation can cause thermal stress and illness because most fish depend on the temperature of the aquatic environment to regulate their internal temperature [ 2 ]. Fish actively seek the optimal temperature in a thermally inconsistent environment, resulting in metabolic issues and fatigue [ 3 ]. Thus, a fish tank without a hot spot could be beneficial for the survival of animals, including pet fish. A conventional aquarium heater (AH) provides automatic control of water temperature in the range from 20 °C to 34 °C [ 4 ]. An AH operates inside the fish tank assisted by a pump to diffuse heated water. However, using agitation to assist heat diffusion compromises the device efficiency and might introduce noise-induced stress [ 5 ]. Thus, the use of AH without circulation produces localized heating shown in Fig. 1 (a), which causes water temperature variation, resulting in thermal stress to pet fish. Various plane heaters have been proposed to overcome localized water heating in a fish tank. Conductive glass walls exhibit good transparency and robust design but lack flexibility and scalability [ 6 ]. A transparent panel heater shows good flexibility, but it has low productivity [ 7 ]. An adhesive heater film is flexible and transparent, but its scalability is limited [ 8 ]. A heater film should have facile scalability and uniform heating for a variety of fish tanks. A transparent heater film (TH) incorporates a conductive layer made of carbon, metal, oxides, etc., supported on polymer sheets [ 9 , 10 ]. Among these materials, metal-containing conductive layers show promise in transparent conductive film applications due to their relatively high electrical conductivity. A TH made of silver nanofiber shows low sheet resistance, but spun layers poorly adhere to plastic sheets [ 11 ]. Silver nanowire or silver nanowire composite films show good transparency and facile fabrication, but their sheet resistance is high at ∼10 Ω/□ [ [12] , [13] , [14] ]. TH can be prepared by a metal mesh with a UV embossing method, which results in good transparency, low sheet resistance, and high productivity [ 15 ]. Here, we propose a plane heating technique for fish tanks with a TH. The TH can produce a uniform thermal surface around the fish tank. The optical transparency of the TH is visually attractive for exhibition of the fish in the tank. Plane heating relies on an extended thermal surface area and lower working temperatures, as shown in Fig. 1 (b).
Methodology Device The TH consists of a heating surface made of a metal mesh on a polyethylene terephthalate (PET) film, as depicted in Fig. 2(a) [16]. The heating surface should produce a uniform temperature field. A TH has optical transparency and enables aesthetically pleasing attachment to a fish tank wall, as shown in Fig. 2(b). THs should have higher electrical resistance than transparent electrodes to reduce the current capacity required of the power supply wires [17]. Transparent electrodes, in contrast, require low sheet resistance to enhance electrical conductivity and reduce contact loss [18]. Fabrication process The fabrication process of a TH is shown in Fig. 3 and includes the following steps: (1) produce a PDMS (polydimethylsiloxane) replica from a nickel master mold, (2) transfer the patterns from the PDMS mold onto a PET substrate with the UV embossing method, (3) fill in the micro trenches of the mesh pattern with silver paste by the doctor blade technique, and (4) create a power bus with copper tape and coat a 4-mm-wide stripe connecting the metal mesh and copper tape with silver paste [15]. The metal mesh is based on a grid design with an 800-μm pitch and a 10-μm linewidth. The heating surface area is 20 × 20 cm², and its electrical resistance is 1.6 Ω. The electrical resistance is readily tunable by the aspect ratio or shadowing factor of the metal mesh [19,20]. Testing setup in a fish tank The testing setup is prepared as follows: a fish tank (30 × 30 × 30 cm³) shown in Fig. 4, tap water, and supplied power (100 W, DC). Both the AH (Fig. 4A) and the TH (Fig. 4B) use 100 W via ON/OFF control at a duty cycle of 70 % and a control period of 12 min. The coil resistance of the AH is 440 Ω, and the resistance of 3 sheets of TH is 4.8 Ω. The AH is operated inside the fish tank, and 3 sheets of TH are attached to the external fish tank walls. The heating surface and the water temperature at the bottom, middle and shallow levels are monitored by K-type thermocouples (Fig. 4, TC) (−267 to 260 °C). The data acquisition (DAQ) platform uses a 16-channel temperature module (sensitivity: 0.02 °C), and the temperature data are saved every 370 ms. The computer-based interface allows simultaneous ON/OFF control and data processing.
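For intuition about what these resistances imply for the power supply, a short back-of-the-envelope sketch is given below. It only applies P = V²/R to the reported values (100 W into the 440 Ω AH coil and into the 4.8 Ω stack of three TH sheets) and the 70 % duty cycle; the script and its helper function are illustrative and not part of the original test setup.

```python
# Illustrative operating-point check (not from the paper): voltage and current
# needed to dissipate 100 W in the 440-ohm AH coil versus the 4.8-ohm stack of
# three TH sheets, plus the time-averaged power at a 70 % duty cycle.

def operating_point(power_w: float, resistance_ohm: float) -> tuple:
    """Return (voltage, current) for a purely resistive DC load."""
    voltage = (power_w * resistance_ohm) ** 0.5   # from P = V**2 / R
    current = voltage / resistance_ohm            # from I = V / R
    return voltage, current

for label, r_ohm in [("AH coil, 440 ohm", 440.0), ("3 TH sheets, 4.8 ohm", 4.8)]:
    v, i = operating_point(100.0, r_ohm)
    print(f"{label}: {v:.1f} V, {i:.2f} A")

print(f"Time-averaged heating power at 70 % duty: {100.0 * 0.70:.0f} W")
```

At the same 100 W, the low-resistance film stack draws roughly ten times the current of the AH coil (about 4.6 A versus 0.5 A), which illustrates why heater films benefit from a higher electrical resistance than transparent electrodes.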
Results and discussion Physical properties of the transparent heater film The thickness of the metal mesh is 17 μm, as shown in Fig. 5 (a). The average visible transmittance is 81 %, which is 10 % less than that of the 100-μm PET film, as shown in Fig. 5 (b) [ 21 ]. The sheet resistance (Rs) is 0.6 Ω/□. At 20 W, the average temperature in air is 57 °C, and that attached externally to the plastic cup is 52 °C, as displayed by the infrared images in Fig. 5 (c)–(d). The TH is assessed based on criteria for transparent conductive films, which results in a figure of merit (FOM) of 2.8 × 10 3 , suggesting effective electrical conductivity and good transparency, as shown in Table 1 [ 22 ]. Water temperature monitoring with thermocouples The temperature of the heating surface or interface of an AH and a TH with 100 W is depicted in Fig. 6 (a). The peak temperature with AH is 49 °C and that with TH is 33 °C. The temperature difference during ON/OFF control results in 20 °C with an AH and 10 °C with a TH. The TH produces an extended heating surface around the fish tank, whereas an AH directly exposes a hot rod with a small surface area to the aquatic environment. The water temperatures in the center of a fish tank at three different depths are shown in Fig. 6 (b). The water temperature rise at the center of the fish tank begins after 6 min with a TH compared to 29 min with an AH. Additionally, Fig. 7 shows that to increase the water temperature from 23 °C to 24 °C, it takes 28 min with a TH operating at 33 °C, while it takes 50 min with an AH operating at 49 °C. After heating for 1 h, the water temperature is 25.4 °C with a TH and 24.9 °C with an AH. The heating capacity with a TH is not compromised by heat loss to the atmosphere. Faster heat diffusion with a TH results from heat transfer enhancement due to extended heating surface area. Water temperature measured with an infrared camera The temperature fields are obtained with a portable infrared camera (FLIR One Pro LT, -20–120 °C). The fish tank setup is shown in Fig. 8 (a), where the IR camera is positioned 55 cm from the water surface. The initial thermal state is displayed in Fig. 8 (b). After heating for 30 min, a large temperature gradient (between 28 and 32 °C) is caused by the AH, while a uniform temperature field (at 26 °C) with a reduced thermal gradient is produced by the TH, as displayed in Fig. 8 (c)–(d). Theoretical calculation of the heat transfer rate ratio Heat transfer occurs by external convective flow with an AH and internal convective flow with a TH [ 23 ]. The heat transfer rate is described by Fourier's law of conduction for thermal conduction and Newton's law of cooling for thermal convection, as shown in Table 2 [ 24 ]. The ratio of the heat transfer rate (Q̇) between the TH and the AH is estimated based on the following information: (1) temperature difference during ON/OFF control; (2) water at 23 °C, AH at 49 °C, and TH at 33 °C. Heat transfer enhancement with the TH relative to the AH corresponds to a factor of 6 for thermal conduction and a factor of 4.6 for thermal convection. These improved thermal properties are attributed to the extended heating surface area of the TH. Glass has a higher thermal conductivity (κ = 1.06 W/m·K, 296 K) than water (κ = 0.61 W/m·K, 300 K) [ 25 , 26 ]. Thus, fish tank walls with a TH form a planar uniform thermal surface, while a hot region is produced around an AH glass housing.
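Two of the reported values can be cross-checked with short calculations, sketched below. The paper does not state which figure-of-merit definition it uses, so the first check assumes the DC-to-optical conductivity ratio commonly applied to transparent conductors; the second is a lossless, well-mixed lumped energy balance for the 1 °C rise, which ignores heat losses and stratification. Both are rough estimates, not reproductions of the authors' analysis.

```python
# Back-of-the-envelope checks on two reported values. Assumptions (not stated
# in the paper): the FOM is the DC-to-optical conductivity ratio for
# transparent conductors, and the tank behaves as a lossless, well-mixed
# 27 L body of water receiving the full time-averaged electrical power.

Z0 = 376.73                   # impedance of free space, ohm
Rs = 0.6                      # measured sheet resistance, ohm per square
T = 0.81                      # measured average visible transmittance

fom = Z0 / (2 * Rs * (T ** -0.5 - 1))
print(f"FOM ~ {fom:.1e}")     # ~2.8e+03, consistent with Table 1

mass_kg = 0.30 ** 3 * 1000.0  # 30 x 30 x 30 cm tank filled with water, ~27 kg
c_p = 4186.0                  # specific heat of water, J/(kg*K)
p_avg = 100.0 * 0.70          # 100 W at a 70 % duty cycle, W

t_min = mass_kg * c_p * 1.0 / p_avg / 60.0
print(f"Ideal 1 degC heating time ~ {t_min:.0f} min")  # ~27 min
```

The conductivity-ratio assumption reproduces the reported FOM of about 2.8 × 10³, and the idealized 27 min heating time is close to the 28 min measured with the TH, whereas the 50 min measured with the AH is consistent with the localized heating and stratification described above.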
Conclusions Local heating produced by the conventional aquarium heater causes water temperature variation in a fish tank, which can induce thermal stress in animals including pet fish. Therefore, plane heating in a fish tank is shown here using a transparent heater film that allows for visually attractive integration with the fish tank wall, which does not compromise view of fish in the tank. The transparent heater film is based on a metal mesh with a transmittance of 81 %, sheet resistance of 0.6 Ω/□, and mean temperature of 57 °C in air with 20 W. Increasing the water temperature from 23 °C to 24 °C at the center of the fish tank takes 28 min with a transparent heater film operating at 33 °C, while it takes 50 min with an aquarium heater at 49 °C (both with 100 W). Thermal images reveal that local heating with an aquarium heater causes an intense thermal gradient, whereas plane heating with a transparent heater film produces a uniform temperature field. The heat transfer enhancement with the transparent film heater relative to the standard aquarium heater is estimated to be 6:1 for thermal conduction and 4.6:1 for thermal convection. These enhancements are attributed to the extended heating surface area of the transparent heater film. Plane heating with the transparent heater film enhances heat diffusion and reduces water temperature variation, which is beneficial to increase the survival of sensitive aquatic species. An extension of this work would be to modify the film heater into an immersible plane heater to further increase the thermal efficiency.
The water temperature in a fish tank is important for fish health. A conventional aquarium heater produces localized heating that causes water temperature variation, resulting in thermal stress to fish. This study presents plane heating with a transparent heater film that is aesthetically attractive when applied to fish tanks. The transparent heater film comprises a metal mesh with an optical transparency of 81 %, sheet resistance of 0.6 Ω/□, and mean heating surface temperature of 57 °C at 20 W. In the test setup, 100 W is applied to compare an aquarium heater and a transparent heater film. Increasing the water temperature from 23 °C to 24 °C at the center of the fish tank needs 28 min with the transparent heater film operating at 33 °C, whereas the same temperature increase needs 50 min with an aquarium heater operating at 49 °C. The planar heater thus results in enhanced heat diffusion and reduced water temperature variation due to its extended heating surface area. Graphical abstract Keywords
Simulation The simulation is performed using the energy equation model in ANSYS-Fluent [ 27 ]. The setup model is as follows: water at 23 °C in a glass container, an AH modeled as a glass cylinder at 50 °C, a TH modeled by a metal sheet at 35 °C, and dimensions as described in Section 2 . Material properties are provided by the ANSYS library. The heat transfer coefficient through the floor wall is 1000 W/m 2 K, the remaining walls are adiabatic, and the temperature of the surrounding medium is 23 °C. The simulation results of the top view show local heating with AH compared to plane heating with TH after 3 h, as displayed in Fig. 9 (a) and (b). Numerical studies of water-based fluids with a plane heater and a rectangular container show evenly proportioned heat flow streamlines, which are consistent with the result with a TH [ 28 , 29 ]. Additional information No additional information is available for this paper. CRediT authorship contribution statement Gustavo Panama: Writing – original draft, Visualization, Validation, Software, Investigation, Formal analysis. Juntae Jin: Methodology, Conceptualization. Dong Jin Kim: Methodology, Conceptualization. Seung S. Lee: Writing – review & editing, Supervision, Project administration, Funding acquisition. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments This research was funded by the 10.13039/501100003725 National Research Foundation of Korea , Grant # NRF-N01210648. We would like to acknowledge the technical support from ANSYS Korea.
CC BY
no
2024-01-16 23:43:51
Heliyon. 2024 Jan 6; 10(1):e24066
oa_package/e4/57/PMC10788796.tar.gz
PMC10788803
38226006
Introduction Psychological symptoms that cancer patients may experience vary from concerns, worries, a sense of uncertainty, sadness and feelings of hopelessness to specific psychiatric anxiety and depressive disorders (American Psychiatric Association, 2013). Anxiety and depression are two of the most common psychological conditions, and fear of cancer progression in patients during active treatment and fear of cancer recurrence in cancer survivors (together abbreviated here as FCR) are common cancer-specific anxiety-related conditions (Grassi et al., 2023). In previous studies, experiencing anxiety, depression or FCR was associated with diminished health-related quality of life (HRQoL), a higher symptom burden (e.g., pain or nausea), poor treatment adherence, poorer prognosis and higher mortality (Arrieta et al., 2013; Carbajal-López et al., 2022; X. Wang et al., 2020, 2020b). Gastrointestinal stromal tumor (GIST) patients also suffer from psychological symptoms; they have emphasized experiencing fear of disease progression and of resistance to treatment, fear of death, scan-related anxiety, and changes in mood and emotions, including feeling down, depressed, and easily becoming emotional (de Wal et al., 2023; Fauske et al., 2020; Macdonald et al., 2012; van de Wal et al., 2023). GIST is a rare cancer that can arise anywhere along the gastrointestinal tract, affecting 8 per million persons per year (van der Graaf et al., 2018). Surgical resection is the cornerstone of treatment for localized GIST, combined with (neo)adjuvant imatinib in patients with locally advanced, sometimes large tumors at diagnosis, or at high risk of recurrence after resection (Casali et al., 2022). In one in five patients the GIST has already metastasized to the peritoneum or liver at diagnosis (van der Graaf et al., 2018); these patients often depend on life-long treatment with tyrosine kinase inhibitors (TKIs), of which imatinib is the first line. Imatinib significantly improved the median survival of metastatic GIST patients from 12 to 68 months (Mohammadi et al., 2023). After failure of imatinib, sunitinib, regorafenib and ripretinib are currently registered (Casali et al., 2022). Despite these improvements in survival, most patients with metastatic GIST will eventually succumb to their disease (Blanke et al., 2008; Casali et al., 2017). For these patients the fear of disease progression, also described as the sword of Damocles, is undeniably a challenge (Custers et al., 2015). A Dutch study that assessed FCR in patients with localized or metastatic GIST reported that half of the patients experienced severe fear, resulting in more general and cancer-specific psychological distress compared to patients with low fear (Custers et al., 2015). Up to now, most studies in GIST patients have concerned patients in a metastatic setting, where fear and anxiety are more to be expected because patients regularly undergo scans on which disease progression might be detected. However, the majority of GIST patients are treated in a curative setting, where surgery alone is curative in half of the patients and 5-year relapse-free survival rates reach 63–70% without or with adjuvant imatinib, respectively (Casali et al., 2015).
In this group, it can be hypothesized that psychological symptoms (i.e., FCR, anxiety and depression) are less common than in patients treated in a palliative setting, because these patients have a high chance of being cured and therefore the prospect of living a GIST-free life. Furthermore, imatinib itself might also result in psychological side effects, such as anxiety, becoming easily emotional or depression, since patients described these as being related to their imatinib treatment in qualitative studies (de Wal et al., 2023; Fauske et al., 2020; van de Wal et al., 2023), yet this has never been reported in larger quantitative studies. Therefore, the aims of this study were to (1) investigate the prevalence of anxiety, depression and severe FCR in GIST patients treated in a curative or palliative setting, (2) compare the prevalence of anxiety and depression with that in an age- and sex-matched norm population, (3) identify sociodemographic, clinical and psychological factors associated with anxiety, depression and severe FCR, and (4) study the impact of these psychological symptoms on health-related quality of life (HRQoL).
Methods Study design & data collection Data of the cross-sectional ‘Life with GIST’ study was used, which was approved by the medical ethical committee of the Radboud University Medical Center (2019-5888). The study design and data collection were described previously ( van de Wal et al., 2023 ). In summary, this study was conducted among patients registered in the Netherlands Cancer Registry (NCR), diagnosed with GIST between 2008 and 2018, and treated within one of the five GIST reference centres. All patients provided informed consent, including permission to link their study data to data from the NCR. Data collection took place from September 2020 through June 2021 in the Patient-Reported Outcomes Following Initial treatment and Long-term Evaluation of Survivorship (PROFILES) registry ( van de Poll-Franse et al., 2011 ). Sociodemographic and clinical characteristics Patients self-reported sociodemographic (age, marital status, educational level) and clinical characteristics (co-morbidities via the Self-administered Co-morbidity Questionnaire ( Sangha et al., 2003 ), tumor localization, treatment phase, and type of treatment). Additional (gender and socioeconomic status) and missing data were derived from the NCR database, if available. Psychological distress, anxiety and depression The Hospital Anxiety and Depression Scale (HADS) ( Olssøn et al., 2005 ) is a 14-item scale that was used to assess psychological distress, consisting of seven items on anxiety and seven items on depression. Each item was scored on a Likert scale ranging from 0 to 3. Patients’ symptoms were classified as ‘present’ (>11), ‘mild’ (8–10) or ‘no symptoms’ (0–7), for both subscales ( Olssøn et al., 2005 ). To compare the HADS data of our study sample to a norm population, HADS data of an age- and sex-matched normative sample without cancer was obtained from CentERdata, using a household panel representative of the population in the Netherlands. The panel members were randomly matched based on sex and age at the time of questionnaire completion. A total of 873 panel members were matched to 328 GIST patients (ratio 1:2.7). Cancer-related concerns The Cancer Worry Scale (CWS) ( Zebrack et al., 2006 ) is a 8-item scale to identify FCR, this scale was first validated for cancer survivors, but later also for GIST patients ( Custers et al., 2015 ). Items were scored on a four-point Likert scale ranging from 1 to 4, scores were added up to calculate a total score, after which patients were classified as having ‘low fear’ (≤ 14) or ‘severe fear’ (≥ 14) ( Custers et al., 2014 ). Because the CWS only addresses future recurrence and surgery, we added three GIST-specific items of own design to assess concerns of the need for TKIs in the future, dying from GIST in the near future and in the long term future. These items were rated on a four-point Likert scale as well, and patients were classified as either having concerns ‘yes’ (2–4) or ‘no’ (1). HRQoL HRQoL was assessed by the European Organization for Research and Treatment for Cancer Quality of Life Questionnaire C30 version 3.0 (EORTC QLQ-C30) ( Aaronson et al., 1993 ), which consists of 30 items assessing physical, role, cognitive, emotional, and social functioning, the financial impact, global quality of life, and specific symptoms (fatigue, nausea and vomiting, pain, dyspnea, insomnia, appetite loss, constipation, diarrhea). 
All items were scored on a 4-point Likert scale, except the items regarding global health and quality of life, which were scored from 1 (very poor) to 7 (excellent). Next, a linear transformation was applied to standardize the raw scores of the scales so that scores ranged from 0 to 100. Higher scores indicate better global quality of life and functioning, whereas a higher symptom score indicates a higher symptom burden (Aaronson et al., 1993). Statistical analyses All statistical analyses were performed using SPSS Statistics (IBM Corporation, version 29.0, Armonk, NY, USA). Two-sided p-values <0.05 were considered statistically significant. Categorical data were described as frequencies and percentages; continuous data were described as mean and standard deviation (SD). Chi-square tests and independent t-tests were conducted to compare sociodemographic and clinical characteristics, anxiety and depression scores, FCR and GIST-related concerns between GIST patients in a curative and a palliative treatment setting. To compare anxiety and depression scores of GIST patients with those of the age- and sex-matched norm population, chi-square tests and ANOVA tests with post hoc Bonferroni correction were performed. To study the relationship between HRQoL and the outcomes FCR, anxiety and depression, we performed independent-samples t-tests and ANOVA tests with post hoc Bonferroni correction. Three separate multivariable logistic regression analyses were conducted to examine the association between the outcomes severe FCR, symptoms of anxiety and symptoms of depression, and all variables (i.e., sociodemographic, clinical, psychological distress and cancer-related concern variables) with a p-value <0.1 in the univariable logistic regression analysis. We then performed a backward selection, removing the least significant variable from the model until all p-values were <0.1. Before each multivariable regression analysis, the included variables were checked for multicollinearity using variance inflation factors and the variance proportions test.
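As an illustration of the modelling strategy described above (univariable screening at p < 0.1, a variance-inflation-factor check for multicollinearity, and backward elimination until all remaining predictors have p-values below 0.1), a minimal sketch is given below. The data frame, outcome and candidate-variable names are hypothetical placeholders, and the study itself was analysed in SPSS, so this Python/statsmodels sketch only mirrors the reported procedure rather than the actual analysis code.

```python
# Minimal sketch of the reported modelling steps; `df`, `outcome` and
# `candidates` are hypothetical placeholders, not the study data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(df: pd.DataFrame, predictors: list) -> pd.Series:
    """Variance inflation factors, used to screen predictors for multicollinearity."""
    X = sm.add_constant(df[predictors])
    return pd.Series(
        [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
        index=predictors,
    )

def backward_select(df: pd.DataFrame, outcome: str, candidates: list,
                    p_threshold: float = 0.10):
    """Drop the least significant predictor until all p-values are below p_threshold."""
    predictors = list(candidates)
    while True:
        X = sm.add_constant(df[predictors])
        result = sm.Logit(df[outcome], X).fit(disp=False)
        pvals = result.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < p_threshold or len(predictors) == 1:
            return result  # exponentiate result.params to obtain odds ratios
        predictors.remove(worst)
```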
Results In total, 521 GIST patients were invited to participate, of whom 328 (63%) responded. Our study population consisted of slightly more males (53%), had a mean age of 66.7 years at the moment of completing the survey, and was on average 5.9 years after diagnosis (Table 1). Of the 328 patients, 260 (79.3%) were treated in a curative setting, of whom 46 (17.7%) were currently on TKIs, and 68 (20.7%) were treated in a palliative setting, of whom 67 were currently on TKIs. The groups did not differ significantly regarding sociodemographic characteristics. As expected, in the curative setting a significantly higher percentage of patients received surgery (97.7% vs 64.7%, p < 0.001), and in the palliative setting significantly more patients received TKI treatment (98.5% vs 55.8%, p < 0.001). Prevalence of anxiety, depression and severe fear Of all GIST patients, 15% reported symptoms of anxiety, 13% symptoms of depression, and 43% had severe FCR. Severe FCR also occurred in patients who were not anxious or depressed, with 35.7% of these 253 patients reporting severe FCR. In comparison to patients in the curative setting, a significantly higher percentage of patients in the palliative setting reported symptoms of depression (26.5% vs 9.6%, p = .002), had severe FCR (73.4% vs 36.0%, p < 0.001), and more often had concerns about dying from GIST both in the near future (75.0% vs 28.9%, p < 0.001) and in the long term (84.4% vs 44.1%, p < 0.001). In addition, patients in the palliative setting scored significantly higher on psychological distress (M = 10.1 vs M = 6.0, p < 0.001) and had significantly higher scores on the anxiety and depression subscales. An overview of the outcomes of the HADS, CWS and GIST-specific concerns is presented in Table 2. Comparison with the norm population Total psychological distress, anxiety, and depression scores of the GIST patients in the curative setting were comparable to those of the norm population, whereas GIST patients in the palliative setting scored significantly higher on all three scales (Table 3). Furthermore, the percentage of GIST patients who experienced (mild) anxiety symptoms was higher, especially for those in the palliative setting, but not significantly so. However, there was a statistically significant difference for depression symptoms, where a significantly higher percentage of GIST patients in the palliative setting experienced mild symptoms. Factors associated with anxiety, depression and severe fear For our univariable (Supplementary Tables A1–A3) and multivariable (Table 4) logistic regression analyses, the total GIST population was analyzed. Experiencing symptoms of depression (OR 19.7; 95% CI 8.2–47.2; p < 0.001), severe FCR (OR 2.9; 95% CI 1.2–6.9; p = .016) and having concerns about the need for TKI treatment in the future (OR 2.5; 95% CI 1.1–5.6; p = .031) were associated with higher odds of experiencing anxiety symptoms, while being currently on TKIs (OR 3.2; 95% CI 1.3–7.5; p = .009) and having symptoms of anxiety (OR 24.2; 95% CI 10.3–56.9; p < 0.001) were associated with higher odds of experiencing symptoms of depression.
Being female (OR 2.2; 95% CI 1.2–4.1; p = .012), receiving treatment in a palliative setting (OR 2.3; 95% CI 1.2–5.1; p = .032), experiencing symptoms of anxiety (OR 3.8; 95% CI 1.5–9.9; p = .006), and having concerns about the need for TKI treatment in the future (OR 2.4; 95% CI 1.3–4.4; p = .008), about dying from GIST in the near future (OR 4.4; 95% CI 2.1–9.3; p < 0.001) and about dying from GIST in the long term (OR 3.0; 95% CI 1.5–6.4; p = .003) were associated with higher odds of having severe FCR, whereas older age (OR 0.95; 95% CI 0.92–0.98; p = .001) was associated with lower odds of experiencing severe FCR. Impact on HRQoL Experiencing severe FCR resulted in significantly impaired global QoL and physical, role, emotional, cognitive and social functioning compared to patients experiencing low FCR (Fig. 1). GIST patients with severe FCR also reported significantly more symptoms of fatigue, pain, dyspnea, insomnia, loss of appetite, nausea and vomiting, and diarrhea, and more financial difficulties, in comparison to patients with low fear (Supplementary Table B). Compared to patients with no symptoms of anxiety, patients with mild symptoms had significantly impaired global QoL and functioning on all scales, and in patients with present symptoms, global QoL and functioning were even more impaired. Patients with mild and present symptoms of anxiety had significantly higher scores on the symptom scales fatigue, nausea and vomiting, pain, insomnia, and loss of appetite, indicating a higher symptom burden. In addition, they experienced significantly more financial difficulties. For patients with mild and present symptoms of depression, a similar pattern was found, with even more impaired global QoL and functioning scores compared to patients with no symptoms of depression. In patients with mild and present symptoms of depression, a higher symptom burden for fatigue, pain, dyspnea, insomnia, diarrhea, and financial difficulties was reported.
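For readers less familiar with logistic regression output, the odds ratios and confidence intervals reported above follow from the model coefficients as OR = exp(β) and 95% CI = exp(β ± 1.96 × SE). The short sketch below reconstructs this for the female-sex estimate; the standard error used here is an assumed value chosen only to approximate the published interval, not a number taken from the study.

```python
# Illustrative conversion from a logistic-regression coefficient to an odds
# ratio with a 95% CI. The standard error is assumed for illustration so that
# the result approximates the published estimate for female sex (OR 2.2;
# 95% CI 1.2-4.1); it is not taken from the study output.
import math

beta = math.log(2.2)   # log-odds coefficient implied by the reported OR
se = 0.31              # assumed standard error (hypothetical)

or_point = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
print(f"OR {or_point:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")
# -> OR 2.2 (95% CI 1.2-4.0), close to the reported interval
```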
Discussion In this cross-sectional study we investigated the psychological symptoms of anxiety, depression and FCR among GIST patients. Of the 328 patients, 15% reported anxiety symptoms, 13% depression symptoms, and 43% had severe FCR. Significantly more GIST patients in the palliative setting suffered from these psychological symptoms: in this group, 22% had anxiety symptoms, 26% symptoms of depression, and 72% severe FCR. In comparison to the norm population, anxiety and depression levels were comparable for patients in the curative setting but significantly higher for patients in the palliative setting. Several studies that assessed anxiety and depression in large samples of patients with various types and stages of cancer reported similar prevalence rates of 12% to 25% (Brintzenhofe-Szoc et al., 2009; Linden et al., 2012). Few studies have investigated anxiety and depression in GIST patients. In a study among Mexican GIST patients, 31% of the 89 patients experienced psychological distress, which was associated with higher levels of fatigue and lower quality of life and functioning (Carbajal-López et al., 2022). In a German study, 22% of the GIST patients experienced anxiety or depression (Eichler et al., 2022). In that study, GIST and other sarcoma patients were analyzed together, showing that disabled persons, patients in precarious employment, newly diagnosed patients and those with progressive disease should be considered vulnerable groups for developing anxiety or depression. In our study, having symptoms of depression, concerns and fears were associated with higher odds of anxiety symptoms, suggesting that these psychological symptoms often occur together as clusters. For depression symptoms, besides symptoms of anxiety, being currently on TKIs was associated with higher odds. The contribution of current TKI treatment to depression symptoms could depend on multiple factors. It could reflect a direct side effect of imatinib, a consequence of all experienced side effects, or the greater doubts and uncertainties that patients on TKIs experience (e.g., will the treatment be effective and, particularly in a palliative setting, for how long will it remain effective). Depression as a direct side effect of imatinib has not been described in the literature in recent years, even though there is now over 20 years of experience with imatinib. A more likely explanation is that depression symptoms arise as a consequence of the accumulated side effects of imatinib, and that there is overlap between the HADS depression items and these consequences. Although the HADS items are about feelings and moods, they also address looking forward to and enjoying social activities or hobbies (Olssøn et al., 2005), which can be influenced by side effects of imatinib and other TKIs, as described previously (Fauske et al., 2020; van de Wal et al., 2023). In general, imatinib is described as tolerable compared to other systemic therapies, such as chemotherapy. However, this has to be placed in a broader perspective: chemotherapy results in acute and short-term side effects, while TKIs result in less severe but daily and long-lasting side effects. In particular, patients in the palliative treatment setting depend on TKIs and have to continue treatment, and therefore have to cope with these side effects every day.
The continuous fatigue and unexpected diarrhea, both common side effects of TKIs (van de Wal et al., 2022), can make patients less able to enjoy social activities, or more worried when they go out, especially if patients compare this situation with the one before their treatment. Severe FCR was a common psychological symptom in GIST patients, present in almost three-fourths of the palliative patients but also in one-third of the curative patients. The overall prevalence of 43% was lower than in the study of Custers et al. (2015), where 52% of the patients with localized or metastatic GIST reported severe FCR. This difference is possibly explained by the fact that fewer patients were on current TKI treatment in our study (34%) than in the study of Custers et al. (61%), and part of our sample was considered cured and no longer in follow-up. As frequent CT scans and follow-up consultations represent a constant reminder of the cancer and the risk of recurrence or progression, these patients are less exposed to such triggers and therefore experience less FCR (Bui et al., 2022; Custers et al., 2021). Other studies that used the CWS to assess FCR reported severe FCR in 31% of breast cancer patients (Custers et al., 2015), 35% of prostate cancer patients (van de Wal et al., 2017), 38% of colorectal cancer patients (Custers et al., 2016), and 45% of young sarcoma survivors (Pellegrini et al., 2022). Previous studies reported that FCR can be found at all time periods since the cancer diagnosis, but that female sex, a higher number of comorbidities and multimodal treatment were associated with a higher risk, whereas older age decreased this risk (Luigjes-Huizer et al., 2022; Pellegrini et al., 2022). This was partly in line with our study, where being female and receiving treatment in a palliative setting were associated with higher odds of severe FCR, and older age with lower odds. Our findings regarding diminished quality of life and functioning, and higher symptom burden, in patients with psychological symptoms were consistent with previous studies (Arrieta et al., 2013; Carbajal-López et al., 2022). Fatigue was one of the symptoms that was more severely present among patients with psychological symptoms, as they reported significantly higher fatigue scores than patients without psychological symptoms. Fatigue and its impact were also studied in a Dutch sample of GIST patients; in that sample, 30% of the GIST patients suffered from severe fatigue, resulting in higher levels of psychological distress and impaired quality of life and functioning (Poort et al., 2016). It remains unclear whether the higher symptom burden is a result of the psychological symptoms; considering that depression and severe FCR were associated with current TKI treatment and a palliative treatment setting, the higher symptom burden could also be a result of the TKI treatment or the GIST itself. Considering the significant impact of psychological symptoms on HRQoL, the HRQoL of patients can be improved if psychological symptoms are recognized and the follow-up steps are clear. Psychological symptoms are sometimes difficult for surgeons and oncologists to identify, and at the same time there are barriers to referring patients to psychological or psychiatric care (Keller et al., 2004; Passik et al., 1998).
This was underlined by a study in which 73% of the cancer patients remained untreated for their depression, merely 24% received an antidepressant and only 5% were seen by a mental health specialist (Walker et al., 2014). In 2023, a European Society for Medical Oncology (ESMO) clinical practice guideline for anxiety and depression in adult cancer patients was published (Grassi et al., 2023), which recommends regular screening for psychological symptoms. In the case of GIST, both the HADS and the CWS can be used. However, these tools are not sufficient to diagnose anxiety or depressive disorders. Thus, when patients score above the cut-off value, clinicians and oncology nurses should follow up on this and refer them for a more formal assessment by a trained expert in psychology, to determine whether specialized help or psychological treatment is required. If indicated, patients could benefit from psychoeducation, supportive therapy, cognitive-behavioral therapy, relaxation training, mindfulness-based therapy, or treatment with antidepressants (Carbajal-López et al., 2022; Grassi et al., 2023; Lyu et al., 2022). As around half of patients decline specialized help, and only one in four accepts referral to psychological care (Tondorf et al., 2018), there is still a lot to be gained. Clinicians and oncology nurses could motivate patients to participate in screening and to accept referral if indicated, and could also try to reduce possible triggers of psychological symptoms. For instance, reducing the frequency of CT or MRI scans in GIST patients with long-term stable disease could lessen scan-related anxiety, fear and emotional distress before and after these evaluation moments. To the best of our knowledge, this is the largest sample of GIST patients in which psychological symptoms have been studied. Our study population was diverse, including GIST patients on TKIs in palliative and curative treatment settings, but also patients who survived GIST, some of whom were no longer in follow-up. This resulted in a representative cohort of GIST patients in different stages of treatment and follow-up, which made it possible to draw conclusions for the total group of GIST patients, but also to analyze subgroups that are more prone to psychological symptoms, such as patients in a palliative setting. Our study had several limitations. First, the cross-sectional design prevented us from studying causality and changes over time. Second, this was a multicenter study conducted in the Netherlands; therefore, only Dutch GIST patients were included, which could limit generalizability. Last, since reasons for not participating in this study were not collected, and could relate to either poor (mental) health or an absence of symptoms, there could be some selection bias.
Conclusion In conclusion, the prevalence of anxiety and depression symptoms in palliative treated GIST patients is higher compared to the norm population, while the prevalence in curatively treated patients was comparable with the norm population. Given the relatively high prevalence of psychological symptoms and their considerable impact on the patients’ HRQoL, particularly in palliative GIST patients, this deserves more attention in clinical practice. Through regular screening, these symptoms can be recognized and patients can be offered appropriate interventions.
Shared last authorship. Background This study aims to (1) investigate the prevalence of anxiety, depression and severe fear of cancer recurrence or progression in gastrointestinal stromal tumor (GIST) patients treated in a curative or palliative setting, (2) compare their prevalence with a norm population, (3) identify factors associated with anxiety, depression and severe fear, and (4) study the impact of these psychological symptoms on health-related quality of life (HRQoL). Methods In a cross-sectional study, GIST patients completed the Hospital Anxiety and Depression Scale, Cancer Worry Scale, and EORTC QLQ-C30. Results Of the 328 patients, 15% reported anxiety, 13% depression, and 43% had severe fear. Anxiety and depression levels were comparable between the norm population and patients in the curative setting, but significantly higher for patients in the palliative setting. Having other psychological symptoms was associated with anxiety, while current TKI treatment and anxiety were associated with depression. Severe fear was associated with age, female sex, palliative treatment setting, anxiety, and GIST-related concerns. Conclusion GIST patients treated in a palliative setting are more prone to experience psychological symptoms, which can significantly impair their HRQoL. These symptoms deserve more attention in clinical practice, in which regular screening can be helpful, and appropriate interventions should be offered. Keywords
Funding This study was partly funded by research grant from 10.13039/100004336 Novartis (grant 006.18). The funder had no role in the design and conduct of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Author contributions I.D, O.H and W.G contributed to the study conception, design and methodology. Data curation was done by D.H, I.D, H.G, A.O, A.R, N.S, O.H and W.G. Formal analysis were performed by D.W, under the supervision of O.H and W.G. Funding acquisition was done by I.D., O.H. and W.G. The visualization and original draft of the manuscript was written by D.W. All authors reviewed and edited the draft version of the manuscript, and approved the final manuscript. Ethics approval This study was performed in line with the principles of the Declaration of Helsinki. Ethical approval was obtained from the medical ethical committee of the Radboud University Medical Center (2019-5888). According to the medical ethical regulations, approval of one ethical committee for survey research is valid for all participating centres. Consent to participate Written informed consent was obtained from all individual patients included in the study. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Supplementary materials Acknowledgments The authors would like to thank the patient advocacy group ‘contact group GIST’ for their help in the ‘Life with GIST’ study, and Carla Fokkema-Vlooswijk and Esther Derksen-Peters for their support with the recruitment and data collection through PROFILES.
CC BY
no
2024-01-16 23:43:51
Int J Clin Health Psychol. 2024 Jan 9 Jan-Mar; 24(1):100434
oa_package/2f/96/PMC10788803.tar.gz
PMC10788806
38226272
Introduction In recent years, there has been increasing awareness of and concern about the environmental effect of traditional food packaging materials across the world [1,2]. The hunt for environmentally acceptable and sustainable alternatives has resulted in an increase in research and innovation in the field of biomaterials for food packaging [3]. This in-depth examination digs into new trends in biomaterials for sustainable food packaging, shedding light on the varied range of materials, technologies, and market dynamics that are influencing the future of this crucial sector. The exploration of diverse biomaterial categories that hold enormous promise for sustainable food packaging is one of the key emphasis topics. Biodegradable polymers, which disintegrate spontaneously over time, have received a lot of interest because of their potential to minimize plastic waste [4]. Ceramics are also being investigated as feasible options for food packaging applications because of their natural durability and resilience to environmental conditions [5]. Composites, which blend several biomaterials to exploit their distinct features, provide a diverse approach to packaging solutions [6]. Furthermore, the use of metals and alloys in food packaging is gaining popularity since they provide strong protection and can be recycled easily [7]. Nanotechnology, an innovative field of study, is making considerable inroads into the packaging business [8]. This paper delves deeply into nanotechnology in packaging, emphasizing its potential to increase biomaterial performance, improve barrier qualities, and lengthen food product shelf life. Edible films and coatings, a subset of nanotechnology, represent an intriguing new direction in sustainable packaging, providing novel solutions that not only preserve food but can also be consumed, minimizing packaging waste [9,10]. To present a complete picture, this review digs into worldwide market trends in biomaterials for sustainable food packaging. It investigates the growth trajectories of various biomaterials and technologies, highlighting the places where these breakthroughs are gaining popularity. It also looks at consumer preferences and regulatory frameworks that are altering the packaging environment. Fig. 1 illustrates the advantages of using biomaterials for food packaging. It is critical to recognize that the move to sustainable food packaging is not without difficulties; these include the challenges faced by academics, creators, and policymakers, such as monetary issues, technical constraints, and the requirement for standardized testing and certification. Finally, this review provides a glimpse into the field's potential future paths by picturing a situation in which biomaterials play a significant role in minimizing the environmental impact of food packaging. Researchers, businesspeople, and legislators who wish to comprehend the dynamic and evolving world of biomaterials for sustainable food packaging will find this in-depth examination to be a helpful resource. By studying the many types of biomaterials, nanotechnology applications, waste-derived solutions, global market trends, and the challenges and opportunities that lie ahead, it gives thorough insight into one significant part of the sustainable packaging revolution [11]. The pursuit of sustainable alternatives is being driven by the growing global concern over the environmental impact of existing food packaging materials.
Biomaterials are the main emphasis, which is encouraging further research and development in the field of food packaging [12]. This topic includes biodegradable polymers, ceramics, composites, metals/alloys, and other biomaterial categories with intriguing future applications. Interestingly, nanotechnology emerges as a key player, increasing the performance of biomaterials, strengthening their barrier properties, and prolonging the shelf life of food products. The discussion explores edible coatings and films as fascinating applications of nanotechnology that provide ways to cut down on packaging waste while preserving food. The analysis goes beyond materials to look into global biomaterials industry trends for environmentally friendly food packaging. The development paths of different biomaterials and technologies are examined, with particular emphasis on how consumer preferences and legal frameworks influence the packaging industry. Fig. 1 provides a graphic representation of the benefits of using biomaterials in food packaging. The introduction notes that despite the clear advantages, there are still obstacles that academics, creators, and legislators must overcome. These obstacles include lack of funding, technical difficulties, and the requirement for certification and standardized testing. A preview of future situations where biomaterials greatly reduce the environmental impact of food packaging is provided in the conclusion. Researchers, industry experts, and legislators looking for insights into the dynamic and developing field of biomaterials for sustainable food packaging may find this thorough analysis to be a useful resource [13].
Conclusion Biomaterials research for sustainable food packaging is a promising avenue for resolving the environmental challenges connected with traditional packaging materials. Metals and alloys, polymers, ceramics, and composite materials containing nanoparticles provide a variety of alternatives for improving packaging sustainability. These materials offer prospects to reduce dependency on nonrenewable resources, reduce pollution, and lower the packaging industry's ecological footprint. To fully exploit the promise of biomaterials in food packaging, it is critical to emphasize research and development while encouraging collaboration among scientists, companies, and legislators. Biodegradability, recyclability, and the use of renewable resources should be prioritized in sustainable packaging solutions. Furthermore, adding cutting-edge technologies such as smart packaging and advanced processing techniques can improve the performance and functionality of biomaterial-based packaging. It will be critical to educate stakeholders on the environmental benefits of biomaterials and to incentivize their adoption through legislative frameworks or market-driven efforts. The packaging sector can make a substantial contribution to a more sustainable and ecologically friendly future by embracing these comprehensive initiatives [135].
This comprehensive review investigates a variety of creative approaches in the field of sustainable food packaging biomaterials in response to growing environmental concerns and the negative effects of traditional plastic packaging. The study carefully looks at new developments in biomaterials, such as biodegradable polymers, ceramics, composites, and metal alloys, in response to the growing need for environmentally suitable substitutes. It highlights how they might replace conventional plastic packaging and lessen environmental damage. Moreover, the incorporation of nanotechnology into packaging is closely examined due to its crucial function in improving barrier qualities, introducing antimicrobial properties, and introducing smart packaging features. The investigation includes edible coatings and films made of biodegradable polymers that offer new sensory experiences in addition to prolonging the shelf life of products. The review emphasizes the use of biomaterials derived from food processing and agricultural waste, supporting environmentally responsible methods of producing materials while simultaneously using less resources and waste. As a strong defense against plastic pollution, the report highlights the food industry's increasing use of recyclable and biodegradable packaging, which is in line with the concepts of the circular economy. A movement in consumer tastes and regulatory pressures toward sustainable food packaging is evident in global market patterns. Notwithstanding these encouraging trends, there are still issues to be resolved, including cost-effectiveness, technological constraints, and the scalability of biomaterial production. This thorough analysis concludes by highlighting the critical role biomaterials have played in guiding the food industry toward sustainability and emphasizing the need for ongoing research and development to adequately address environmental issues on a worldwide scale and satisfy the growing demand for environmentally friendly packaging options. Biomaterials show great promise as catalysts for the food industry's transition to a sustainable future. Keywords
Functions of biomaterials Biomaterials have become essential in many areas of business and healthcare, and food packaging is one such area where they have had a big influence [14]. Biomaterials are the best choice for assuring the safety and quality of food items along the whole supply chain due to their wide range of features [15]. Different biomaterials are used in food packaging to satisfy specific needs, and each one has its own benefits, as shown in Fig. 2. Remarkably, metals and alloys have been used in orthopedic screws, dental implants, and even in food packaging, where their strength and longevity assure the safety of consumables [16]. On the other hand, polymers are adaptable biomaterials utilized for prosthetic skin, medicine delivery systems, and food packaging due to their light weight and flexibility [17]. Ceramics are used to protect food goods and have been used in bone replacements, heart valves, and joint replacements because of their biocompatibility. Composites, which incorporate several biomaterials, have also been successfully used in biosensors and microelectrodes, highlighting their relevance in improving food packaging solutions [18]. This broad range of biomaterials serves as an example of the creative steps being taken to develop food packaging technology in order to create a safer and more environmentally friendly future. Below is a list of several biomaterial kinds. Biodegradable polymers The increased concern about plastic pollution and environmental sustainability has led to the emergence of biodegradable polymers as a viable biomaterial for food packaging applications. These polymers have a number of benefits and are frequently made from renewable resources like corn starch, potatoes, or sugarcane. When disposed of, they gradually decompose into harmless components, decreasing landfill waste while effectively shielding food goods from environmental variables like moisture and oxygen. Food packaging that decomposes also satisfies rising customer demand for environmentally friendly options while extending the shelf life of food. However, challenges remain, such as achieving appropriate barrier characteristics, cost-effectiveness, and scalability [19]. Their natural capacity to decompose reduces the negative effects on the environment and addresses issues related to plastic waste. However, there are still issues that need to be resolved through continued study, such as inferior mechanical strength and barrier qualities when compared to traditional plastics. Formulations can be strengthened by blending or adding reinforcing ingredients, and optimizing processing methods can also lessen these restrictions. In the search for environmentally friendly and sustainable food packaging solutions, biodegradable polymers present a promising material since they strike a compromise between biodegradability and performance [20]. Ceramics Ceramics are recognized as a suitable biomaterial for food packaging due to their distinctive mix of characteristics. Since these substances are chemically inert and do not interact with food, the flavor and quality of the food are preserved. Additionally, ceramics have remarkable temperature stability, making them appropriate for both hot and cold food items. Their strong mechanical strength and resistance to damage help make packaging more durable and lower the risk of contamination.
Additionally, ceramics are non-porous, limiting the transport of moisture or gases, extending the shelf life of perishable goods. Because of this, ceramics are becoming a more appealing alternative when looking for sustainable and food-safe packaging solutions [ 21 ]. These materials provide strong packaging options because of their remarkable strength and rigidity. They are usually made of oxides, nitrides, or carbides. Because of their exceptional heat stability, ceramics guarantee that food quality will not be compromised while being stored or transported. Their brittleness, however, can be a drawback, requiring cautious handling and design considerations to prevent breaking. Notwithstanding this disadvantage, ceramics' natural strength makes them a good choice for applications involving protective packaging. Using ceramics to its full potential in environmentally friendly food packaging is consistent with the overarching objective of lessening the impact on the environment and encouraging eco-friendly substitutes [ 22 ]. Composites Due to their distinctive mix of qualities, composites are being used more and more as biomaterials in the field of food packaging. These materials generally consist of a matrix, which is frequently a biodegradable polymer like PLA (polylactic acid), reinforced with natural fibers like cellulose or nanomaterials like graphene. This interaction produces packaging materials that are not only sturdy and light in weight but also have good barrier characteristics that stop the passage of oxygen, moisture, and other pollutants, hence increasing the shelf life of food products. Additionally, because composites are frequently biodegradable and have a lower environmental effect than standard plastics, their usage in food packaging is in line with sustainability objectives. Therefore, the search for more efficient and environmentally friendly food packaging solutions points to composites as a viable direction [ 23 , 24 ]. When compared to conventional packaging, their advantages include better barrier qualities, flexibility, and a lower environmental effect. However, there are drawbacks, such as difficulties with recycling because of different material compositions. Furthermore, some composites might not be able to tolerate very high or low temperatures, which could affect which foods they can be used with. Optimizing composite materials for food packaging requires finding a balance between strength, environmental effect, and recyclability. This will ensure both sustainability and functionality in the ever-changing packaging solutions market. Metal and alloys Metal and alloys have carved out a space for themselves in the world of biomaterials for use in food packaging because of their distinct features. For instance, stainless steel is highly regarded for its ability to resist corrosion and durability, making it a perfect material for machinery used in food processing and packaging. On the other hand, because of their superior barrier qualities that guard against moisture, light, and oxygen, aluminum alloys are frequently used in the manufacturing of lightweight, recyclable food packaging containers, such as cans and foil. Due to their capacity to be recycled, these materials are crucial parts of contemporary food packaging solutions since they not only guarantee the preservation of food quality and safety but also support sustainability initiatives [ 25 , 26 ]. 
Understanding the various biomaterial types, such as metal alloys, ceramics, composites, and biodegradable polymers, in detail is essential to appreciating the benefits and distinctive characteristics of using biomaterials for sustainable food packaging. For example, naturally disintegrating biodegradable polymers lessen plastic pollution and promote environmental sustainability [ 27 ]. Because of their strength and thermal stability, ceramics provide excellent food preservation. Composites combine several materials to create synergistic effects that improve performance. Metal alloys offer sturdy substitutes because of their strength and malleability. Examining the role of nanotechnology reveals a transformative aspect: the incorporation of nanoparticles into biomaterials enhances their functions by introducing greater mechanical strength, better barrier qualities, and antibacterial characteristics. The combination of biomaterials and nanotechnology increases the overall effectiveness of sustainable food packaging and opens up new avenues for research into environmentally friendly solutions [ 28 ]. As environmental worries over traditional plastic packaging grow, this review critically examines sustainable alternatives. The environmental risks associated with conventional plastics have prompted a detailed investigation of various biomaterials, nanotechnology, and novel alternatives such as edible films. Making this change is essential to reducing the harm that plastic waste does to ecosystems. In order to protect the environment, the analysis emphasizes how urgent it is to embrace sustainable techniques and materials. A variety of biomaterials, such as biodegradable polymers, durable ceramics, and creative composites, demonstrate encouraging progress in minimizing environmental damage. Furthermore, the use of nanotechnology presents a paradigm change, augmenting the functionalities of biomaterials with increased mechanical strength, improved barrier qualities, and improved antibacterial activity. The investigation of edible films further highlights the multifaceted strategy for environmentally friendly food packaging. Nanomaterials The use of nanomaterials in food packaging has grown in popularity because of their special qualities, which improve packaging efficiency and lengthen the shelf life of food products [ 29 ]. Compared to conventional packaging materials, nanocomposites, such as nanoclay and graphene-based materials, offer better mechanical strength, barrier properties, and thermal stability. Because of their antibacterial qualities, silver nanoparticles lower the risk of foodborne infections by preventing the growth of bacteria and fungi. Because titanium dioxide nanoparticles can block UV rays, they can prevent food from deteriorating due to light and also contribute to enhanced barrier properties. An environmentally beneficial substitute is provided by nanocellulose, which is made from plant fibers and is renewable and biodegradable. Because of its excellent flexibility and tensile strength, it can be used in a variety of packaging applications. Although less prevalent, quantum dots are being investigated for their potential to indicate food freshness through color changes in intelligent packaging. Nanomaterial safety is still a source of concern despite these benefits. The goal of current research and regulatory frameworks is to mitigate any possible dangers related to the release of nanoparticles into food.
The necessity of striking a balance between improving packaging functionality and guaranteeing food safety for consumers is highlighted by the comparative examination of these nanomaterials [ 30 ]. Biomaterials are a wide class of materials with distinctive properties that are essential to medical and biotechnological breakthroughs. The nature of their occurrence or origin, dimensional stability, contact with living body tissues, biodegradability, structural aspect, and use are some of the major factors that determine how they should be classified. Researchers and practitioners can build medical devices, implants, and therapeutic treatments by comprehending and classifying biomaterials according to these characteristics. This categorization system enables a thorough investigation of the characteristics and uses of biomaterials, fostering innovation in biotechnology and healthcare while guaranteeing that biomaterials are used in a safe and effective manner. Table 1 displays several biomaterial categorization schemes. The categorization of biomaterials according to several parameters offers useful insights into their properties and uses. This categorization system is primarily intended for the medical and biotechnology fields, but it may also be applied to food packaging owing to shared material characteristics and interactions with living things [ 32 ]. The classification of biomaterials according to their chemical makeup emphasizes their variety, which includes ceramics, polymers, metals, and composites [ 33 ]. This diversity is especially important in the context of food packaging, since different materials offer distinct benefits. For instance, metals like aluminum are favoured for their barrier qualities, whereas polymers like polyethylene are frequently utilized because of their low weight and flexibility [ 34 ]. Food packaging materials are also influenced by classification by type of occurrence or origin (natural, semisynthetic, synthetic). Although synthetic materials like plastics have become more popular due to their adaptability and affordability, natural materials like paper and cardboard have long been utilized. The same principles of dimensional stability that divide biomaterials into nano, micro, and macro forms may be used to classify the materials used in food packaging. The barrier qualities and shelf life of food items can be improved by using nanomaterials such as nanoparticles or nanocomposites [ 35 ]. The classification of biomaterials as resorbable, non-resorbable, bioactive, or bio-inert, based on their interactions with living human tissues, may not relate directly to food packaging. The idea of bioactivity can, however, be applied to packaging materials that interact with food to keep it fresh longer or prevent spoilage. Both food packaging and biomaterials benefit from the trait of biodegradability. In line with the rising need for environmentally friendly packaging solutions, biodegradable packaging materials like PLA (polylactic acid) provide eco-friendly alternatives to conventional plastics [ 36 ]. Classification by porous or non-porous structure and other structural factors can likewise be applied to food packaging [ 37 ]. Controlled gas exchange is possible using porous materials, which is essential for maintaining the quality of some food items. Applications for biomaterials in the medical industry include diagnostics, treatment, restoration, prevention, and regeneration.
Similar to other types of packaging, food packaging fulfills a variety of purposes, including convenience, branding, and preservation. Though not directly applicable to food packaging, the classification's consideration of application sites (intra-corporeal and extra-corporeal) and contact times with body tissue (limited, prolonged, and permanent) highlights the significance of knowing the precise requirements and interactions of materials with their intended environment. In conclusion, the categorization of biomaterials may be applied to the field of food packaging even though it was initially created for medical applications. This shows how biomaterial categorization concepts may be used to choose packaging materials that are suitable for preserving and safeguarding food goods while taking consumer safety and environmental sustainability into account. The list of biomaterials shown in Table 2 that might be used in food packaging includes both terrestrial and aquatic sources. Although potatoes, tomatoes, sugarcane, turmeric, jackfruit, maize, and other biodegradable plants present interesting alternatives, their availability and scalability require evaluation. Citrus peels could provide natural antioxidants; however, the effectiveness of the extraction method needs to be considered. Red seaweed has the capacity for plentiful growth and biodegradability; however, its usefulness in packaging applications has to be investigated. Although promising for chitosan extraction, using crab shells extensively may pose difficulties. In the end, these biomaterials provide environmentally benign options, but their use in food packaging will depend on successful extraction, scalability, and affordability [ 38 ]. Natural materials have been incorporated into a variety of food packaging technologies [ 55 ]. Packaging made of nanocellulose and sugarcane bagasse is sustainable and biodegradable [ 56 ]. Lycopene from tomatoes increases bioaccessibility in packaged foods and serves as a stabilizer. Antibacterial and antioxidant qualities are added to PLA packaging with turmeric, improving infection control [ 57 ]. ZnO encapsulation and jackfruit-based starch allow pH sensing capabilities. Packaging made of citrus pectin that also contains marjoram or clove oil has antibacterial properties and increases shelf life. As edible film coatings, carrageenan and alginate from red seaweed provide flavors, antibacterial properties, antioxidant properties, and colors. Excellent barriers, biodegradability, and biocompatibility are offered by chitin produced from crab shells and eCNF-based packaging [ 58 ]. Together, these developments propel practical and ecological food packaging solutions. Nanotechnology in packaging Nanotechnology has made great progress in transforming the realm of packaging by integrating biomaterials into its design and manufacturing processes [ 59 ]. This novel method combines the special qualities of nanoparticles with the sustainability and biocompatibility of biomaterials to provide packaging solutions that are both useful and ecologically responsible. The application of nanoscale materials including nanoparticles, nanocomposites, and nanofibers in packaging is a crucial component of nanotechnology [ 60 ]. These materials enhance the shelf life and safety of packaged goods by providing greater strength, barrier qualities, and antibacterial capabilities.
Additionally, packaging that is smart and sensitive that can sense and react to environmental changes like changes in temperature or moisture may be made using nanoscale materials. Nanotechnology enables fine control over material characteristics and may be adjusted to satisfy specific packaging needs such as oxygen or moisture barrier requirements [ 61 ]. This level of personalization guarantees that packaging options are suited for the preservation and protection of varied items. The use of nanotechnology and biomaterials in packaging is a cutting-edge solution that solves both functional and environmental concerns [ 62 ]. It has the ability to minimize waste, improve product safety, and promote sustainable packaging solutions in a variety of sectors. Table 3 mentioned reflect a diverse spectrum of advancements in food packaging and storage technologies that make use of nanoparticles to improve performance. While these developments present exciting prospects, a rigorous study is required to assess their consequences. Products like Debbie Meyer® GreenBags and NanoSealTM coatings, on the other hand, demonstrate environmentally aware attempts to prevent food waste by increasing the shelf-life of perishables. Items impregnated with nanosilver, such as infant milk bottles and food containers, offer improved antibacterial characteristics, which will benefit food safety. Zeomic® silver zeolites packaging film and Agion® technology reveal novel approaches to using silver's antibacterial properties. However, there are some reservations. The long-term safety of nanoparticles for both human health and the environment is unknown. Because of the possibility of nanoparticle migration into food, there are concerns regarding their safety in direct contact applications. Furthermore, the environmental impact of nanoparticle disposal or release during product degradation should be taken into account. Furthermore, while increasing shelf life is beneficial, it may inadvertently encourage wasteful practices if consumers use it as a crutch rather than tackling bigger food sustainability concerns [ 63 ]. These nanotechnology-based packaging solutions have the potential to improve food safety and reduce waste, but they must be thoroughly evaluated in terms of long-term safety, environmental effect, and consumer behavior consequences. Striking a balance between innovation and accountability is critical as these goods gain commercial traction. Edible films and coatings A thin layer of edible substance that is generated as a protective coating on meals and may be ingested together with those items is known as an edible coating. Typically, the product is submerged in a film-forming solution created by the structural matrix before these layers are applied in liquid form to the food's surface. In nature, edible films are free-standing structures, whereas edible coatings cling to the surface of food [ 64 ]. In order to create a continuous framework of films or coatings, many bio-based polymers have been researched. The most prevalent class of biopolymers employed in the creation of edible materials are hydrocolloids, which include both polysaccharides and proteins. Sources for them include plants, animals, and microbes. 
The most widely used polysaccharides in the manufacture of edible films and coatings are cellulose derivatives, starches, alginates, pectins, chitosans, pullulan, and carrageenans, while the most widely used proteins are soybean proteins, wheat gluten, corn zein, sunflower proteins, gelatin, whey, casein, and keratin [ 65 ]. However, the nature of such substances is hydrophilic. As a result, various oils and fats are added to hydrocolloid matrix to improve their water vapour barrier qualities. Wax, triglycerides, acetylated monoglycerides, free fatty acids, and vegetable oils are the most often used [ 66 ]. Biopolymers have traditionally been used as one-component film or coating formulations, and this trend is still present today. But recently, a lot of research has been done on two- and multi-component edible polymers that offer better functional qualities. To create structures with altered physical, mechanical, and barrier qualities that are superior to the one-component material, composite films or coatings are generated in this context by combining two or more film-forming components. Thus, in film-forming formulations, a variety of compounds are utilized to enhance or change the material's fundamental functioning, such as plasticizers, crosslinking agents, emulsifiers, and reinforcements. To further enhance the quality, stability, and safety of packaged foods, various active chemicals, including antimicrobials, antioxidants, colorants, flavors, and nutraceuticals, are added to the film-forming solution. Additionally, such components could give edible material antibacterial, antifungal, or antioxidant capabilities [ 67 , 68 ]. Potential uses for innovative edible coatings are shown in Table 4 . There are several benefits to using different coating materials to preserve and improve the quality of various food products [ 87 ]. Strawberries with a yam starch coating have less deterioration, less weight loss, and more firmness, increasing their shelf life. Gum arabic is helpful in preventing fungal development in strawberries and tomatoes as well as reducing deterioration in Anna apples. When applied to sweet cherries, almond gum exhibits a variety of advantages, including a reduced rate of respiration, a reduction in ethylene generation, and a delay in the occurrence of alterations in a number of quality indicators, including color, weight loss, firmness, titratable acidity, and soluble solid concentration. Apricot gum with Satureja intermedia extract also reduces oxidative substances and fungal contamination in wild almond kernels. Gum arabic functions as an antifungal agent for bananas and papayas, reducing the growth of dangerous fungus when combined with lemongrass and cinnamon essential oil. Listeria monocytogenes growth in cold-smoked salmon is significantly reduced by the addition of potato peel waste and oregano essential oil. Cordia myxa gum prevents browning and increases the shelf life of artichoke bottoms. By maintaining the pH, acidity, fragrance, color, texture, and overall look of Kinnow mandarins, opuntia cactus polysaccharides extend their shelf life. Apples lose less weight and retain more firmness when treated with aloe vera gel, and guavas have a longer shelf life, more ascorbic acid, and fewer total sugars when treated with a mixture of Arabic gum, aloe vera, and garlic extract. In addition, when mixed with potato peel flour for acerolas, fruit and vegetable residue flour from diverse sources prevents weight loss and maintains the color of freshly cut carrots. 
With a variety of fruit and vegetable products, these coating materials exhibit their exceptional capacities to enhance food quality and shelf life, providing useful solutions for food preservation and waste reduction [ 69 ]. The idea of packaging has evolved in the modern era such that packaging systems may now include additional features like antioxidant activity, antimicrobial qualities, oxygen scavenging, and sensor presence. This means that traditional packaging is now known as active and/or intelligent packaging, some of which are edible films or coatings [ 88 , 89 ]. A film is typically thought of as a thin, independent solid sheet that is produced using at least one processing method, applied, and utilized to package or hold food items. Conversely, a liquid coating is directly applied to the surface of food products by brushing, sprinkling, or dipping techniques. The authors would want to make clear that, while film creation occurs in situ, coating the surface of food products, the term “film” is occasionally also used to refer to coatings. As Fig. 3 shows, the field of study known as “edible films/coatings” has grown tremendously in recent years. Using the online database SCOPUS, the search was restricted to the last 10 years and the keywords “edible,” “coatings,” and “films.” It is evident that there have been around five times as many publications on this subject as there were in 2012. Almost half of all published material in the past 10 years has been published if we limit our analysis to the previous few years (2020–2022). However, food has been preserved and given a longer shelf life for ages thanks to edible coatings and films. Wax or lard applied to fruits, vegetables, meat, and fish are a few examples [ 90 ]. Furthermore, investigators have recently investigated a few particular uses for edible films that are connected to their utilization as packaging solutions. These consist of gelatine and soybean polysaccharide soluble sachets for soups and beverages (to be solubilized in water) [ 91 ]. Using gelatine-pectin to make edible wrappers that reduce ricotta cheese's moisture content is another example [ 92 ]. It's also possible to use edible films in place of the original packaging for individual candies. Recyclable and compostable packaging Biomaterials-based recyclable and biodegradable packaging has become a viable alternative to the traditional single-use plastics that pose environmental problems [ 94 ]. These cutting-edge materials are biodegradable and less dependent on fossil fuels since they are made from renewable resources like algae, cellulose, or plant-based starches. By collecting and processing recyclable biomaterial packaging through current recycling systems, a circular economy may be created while reducing waste and saving resources. On the other hand, compostable biomaterials decompose naturally into organic matter, improving soil and lowering landfill trash [ 95 ]. In addition to lowering the carbon footprint of packaging, this strategy promotes a more responsible and eco-friendly approach to product packaging. But for implementation to be successful, there must be broad acceptance, greater consumer awareness, and upgrades to the infrastructure for data gathering and processing. Packaging made of biomaterials that is recyclable and compostable is a potential way to lessen the environmental effect of packaging materials, which is something that consumers and businesses are prioritizing more and more [ 96 ]. 
Table 5 illustrates that numerous producers are providing a variety of environmentally acceptable substitutes for conventional plastics, which is quickly broadening the landscape of biodegradable and biobased polymers. While NatureFlex from Innovia Films shines in film applications, Mater-Bi, a starch-based product from Novamont, finds uses in loose fill, bags, films, trays, and wrap. Tenite from Eastman offers versatility in film applications, and Biograde from FKuR gives cellulosics a new facet. Strong competitors for PLA, a well-known biopolymer, include BASF's Ecovio, NatureWorks' Ingeo, and Cargill Dow's EcoPLA, with uses in rigid containers, films, and barrier coatings. The need for bottles, trays, and films is satisfied by Dupont's Biomax bio-based PET. As a bio-based substitute for rigid containers, film wrap, and barrier coatings, Braskem's Bio-PE is available. With uses in trays, films, and barrier coatings, Monsanto's Biopol and Biomer products are essential for PHA/PHB. This growing selection of bio-based polymers highlights the industry's dedication to sustainability and provides both producers and customers with a wide range of environmentally responsible options, supporting the worldwide initiative to minimize plastic pollution and environmental damage [ 97 ]. In commercial applications, starch-based packaging has become more popular, especially in the food packaging sector. Producers such as Paperfoam, Bio4Pack, Novamont, and Plantic are leading the way in using this environmentally beneficial material. Transparent films from Novamont and net packaging from Bio4Pack are excellent choices for fruit and vegetable packaging since they are sealable, have a great finish, and are durable. Complying with sustainable packaging norms, these materials are compostable and biodegradable (Kabasci, 2020; Molenveld et al., 2015) [ 98 , 99 ]. Plantic offers transparent, oxygen- and water-resistant barrier films that can be coated with glass or aluminum for use in the packaging of meat and fish. These components guarantee the quality and freshness of perishable goods. Film laminates from Plantic and Amcor are certified for direct food contact and promote composting and biodegradation in cheese and dry food packaging. Egg cartons (Paperfoam), bread packaging (Biofutura), coffee capsules (Ethical Coffee Company), and candy containers (Cadbury) are examples of products made of starch. These applications exhibit adaptability, featuring properties such as easy recyclability, lightweight design, and a smooth finish. Furthermore, both drinking straws (Moonen Natural) and hot drink plates/cups (Biome Bioplastics) are made of materials that can be heated to a high temperature while still adhering to strict food safety standards [ 100 ]. To summarise, starch-based packaging materials are advancing significantly in a range of commercial applications, offering a balance between sustainability, food safety regulations, and practicality. These developments support the current trend toward packaging options that are more responsible and ecologically friendly. Global market trends There has recently been a surge in articles in the literature on the reuse of agricultural and food waste, indicating a large market for high-benefit goods with high economic value. As a result, there is growing interest in agro-food waste as a source of bio-based materials with potential use in the packaging sector.
The market for bioplastics is expected to reach USD 2.87 million in 2025, a 36 % increase from 2020, while the market for food packaging was estimated to be worth USD 346.5 billion in 2021 [ 101 , 102 ]. Additionally, in 2022 the majority of the world's top manufacturers of bioplastics were located in Asia, which accounted for more than 41 % of global output, compared to just 26.5 % in Europe, 18.9 % in North America, and 12.6 % in South America [ 84 ]. A burgeoning market for bioplastic packaging is being addressed by bio-based materials made from agri-food wastes, which have various advantages in terms of their positive effects on the environment. Utilizing sustainable resources across a material's whole life cycle helps with sustainability. Starch fiber, cellulose fiber, polysaccharides, chitosan, PLA, PHB, and PHA dominate the market for bio-based materials, but additional substances that might be employed as bioactive components are also present, as indicated in Table 6 . The global market for bioplastics is expected to rise from 2.23 million tons in 2022 to 6.3 million tons in 2027, providing a significant potential opportunity (see the short calculation sketched below). In 2022, food packaging remained the most popular application, accounting for 48 % of the global bioplastics market [ 101 ]. The market for cellulose fibers may exceed USD 60.01 billion by 2028 [ 105 ], and manufacturing of nanocellulose has been the subject of intense research on a global scale, with the majority of pilot and commercial production facilities situated in industrialized nations. While American Process manufactures 1000 kg/d of CNF, CellForce in Canada set up a CNC pilot plant producing 300 tons annually. However, only a small number of businesses, like VTT, have created a CNF-based plastic film for food packaging using the waste products of a food production process. The most recent market data gathered by European Bioplastics shows that in 2022, biodegradable plastics comprised more than 51 % of the world's bioplastic output, with PLA accounting for 20.7 % of that and expected to rise to 37.9 % in 2027. Regarding the PLA manufacturing chain, the fermentation processes, raw material substrates, and lactic acid synthesis account for around 40–70 % of production expenses [ 116 ]. The application has an impact on the ultimate price, which now stands at 4.6 USD/kg and typically tracks the cost of the fermentation feedstocks. When pre-treated maize stover was employed as a substrate, one study found that the minimum selling price for lactic acid was 0.56 USD/kg; as a result, using renewable and inexpensive resources enables a more economically viable method [ 117 , 118 ]. Companies are interested in lowering the cost of eco-friendly packaging since its market share has been expanding. The PLA market is anticipated to grow by 26.6 % between 2022 and 2030, with packaging accounting for more than 36 % of sales [ 119 ]. With various technical manufacturing methods, the top competitors on the international market for PLA are Total Corbion, NatureWorks, Supla, Futerro, and Cofco [ 120 ]. Instead of using materials derived from plants, NatureWorks' technology can use greenhouse gases; Corbion is actively investigating the use of second- and third-generation feedstock, including food waste and industrial waste streams; and Futerro established a new integrated biorefinery in Europe to produce and recycle PLA [ 121 , 122 ].
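The volume forecast quoted above (2.23 million tons in 2022 rising to 6.3 million tons in 2027) implies a compound annual growth rate that can be checked with a few lines of R. This is a simple arithmetic illustration using only the figures stated in the text; it adds no new market data.

```r
# Compound annual growth rate implied by the bioplastics volume forecast cited above.
cagr <- function(start_value, end_value, years) {
  (end_value / start_value)^(1 / years) - 1
}

volume_2022 <- 2.23   # million tons (from the text)
volume_2027 <- 6.30   # million tons (from the text)

implied_cagr <- cagr(volume_2022, volume_2027, years = 2027 - 2022)
cat(sprintf("Implied CAGR 2022-2027: %.1f%% per year\n", 100 * implied_cagr))
# (6.3 / 2.23)^(1/5) - 1 is roughly 0.23, i.e. about 23% per year
```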
Similar to the manufacturing of PLA bioplastic, the cost of the raw materials for PHA is high (between 30 and 40 % of the overall production expenses), reported at USD 2.6/kg when sucrose is used as the carbon source, with a payback period of 2.9 years and a return on investment of 34.2 % [ 123 ] (an illustrative calculation of such indicators is sketched after this passage). When sugarcane bagasse is used as the carbon source to create P3HB, the process can become more economically competitive for an industrial facility. Some businesses have successfully implemented this idea, as demonstrated by Bio-on, which used sugar beet byproducts and molasses as raw materials to produce PHB [ 124 ]. Genecis and Full Cycle have utilized food waste destined for landfills as raw materials to create biodegradable plastics and other high-value products [ 125 , 126 ]. Businesses have started experimenting with bioplastic options, and a growing number of major names have unveiled their first substantial goods [ 127 ]. Agri-food by-products have proved to be promising raw materials in various biotechnology processes, made possible by the collaboration between businesses and academia [ 128 ]. Increased availability of bioplastics, along with a variety of new materials, goods, and uses on the market, has made bioplastics a desirable and well-liked option for customers. The field of biomaterials for sustainable food packaging has seen a significant upsurge in research effort during the last ten years, as Fig. 4 illustrates. The exponential rise in articles in a variety of journals emphasizes how important it is becoming to use biomaterials, especially for paper-based packaging. Notable developments include creative methods for packaging food, characterized by proactive and thoughtful packaging solutions. The incorporation of bioplastics has become a significant aspect, providing eco-friendly substitutes that lessen the environmental impact of conventional packaging materials. Beyond material composition, innovation includes intricate designs that represent a paradigm shift in the practicality and aesthetics of food packaging. Minimizing environmental footprints through sustainable techniques has been a key point. Biobased packaging materials are becoming more and more popular as effective replacements for traditional materials because they are made from renewable resources. This analysis highlights the diverse development of biomaterials in food packaging and captures the dynamic research and development environment targeted at promoting sustainability and lessening the environmental impact of the packaging sector [ 129 ]. Challenges and future directions The use of biomaterials for sustainable food packaging faces a variety of difficulties and opportunities, involving a complex interaction of technological, financial, environmental, and governmental issues [ 130 ]. Finding the delicate balance between cost-effectiveness and sustainability is one of the biggest challenges in the move to more environmentally friendly packaging alternatives. Although biodegradable metals, ceramics, composites, and polymers show great promise, there are still significant barriers to their commercial scalability and cost [ 131 ]. Additionally, the incorporation of nanotechnology into packaging materials opens up fascinating possibilities for improved barrier qualities, antimicrobial activity, and shelf-life extension, but it also necessitates stringent safety evaluations and regulatory frameworks to guarantee consumer well-being [ 132 ].
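Returning briefly to the PHA production economics cited at the start of this passage, the payback period and return on investment reported there depend on capital and operating figures that are not given in the text. The sketch below therefore shows only how such indicators are conventionally computed, using openly hypothetical cash-flow numbers that should not be read as the values behind the 2.9-year payback or 34.2 % ROI.

```r
# Illustrative techno-economic indicators for a hypothetical PHA plant.
# All input numbers below are assumptions for demonstration only; they are NOT the
# figures underlying the payback period and ROI cited in the text.
capital_investment <- 10.0   # million USD (assumed)
annual_revenue     <- 7.5    # million USD (assumed)
annual_op_cost     <- 4.0    # million USD (assumed; raw materials would be ~30-40% of this)

annual_net_cash_flow <- annual_revenue - annual_op_cost
payback_years        <- capital_investment / annual_net_cash_flow
simple_roi           <- annual_net_cash_flow / capital_investment

cat(sprintf("Simple payback period: %.1f years\n", payback_years))
cat(sprintf("Simple annual ROI: %.1f%%\n", 100 * simple_roi))
```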
An intriguing category of biomaterials called edible films and coatings has the potential to drastically cut plastic waste. For their adoption to be successful, improvements in flavor, texture, and functionality are required. An eco-friendly alternative is provided by waste-derived biomaterials, such as those made from post-consumer waste or agricultural leftovers. Nevertheless, overcoming processing obstacles and guaranteeing constant quality are necessary for their development. The sustainable packaging industry must include recyclable and biodegradable packaging materials [ 133 ]. To reach their full potential, though, effective infrastructure for collection and recycling is required, along with consumer education and adherence. The need for environmentally friendly packaging choices is rising as a result of rising consumer awareness of sustainability problems. Overall, there are significant obstacles facing the developing field of biomaterials for environmentally friendly food packaging. First of all, there is a challenge in guaranteeing scalability and cost-effectiveness without sacrificing material performance. Second, removing regulatory obstacles and creating a single framework are necessary to achieve industry-wide adoption and standardization. Furthermore, it is still difficult to balance the various needs of various food products while taking temperature, moisture content, and shelf life into account. In order to successfully integrate biomaterials into the food sector, ensure their viability as a replacement for traditional packaging, and support a more ecologically friendly and sustainable approach, it is imperative that these problems be addressed. Adopting biomaterials for environmentally friendly food packaging presents a number of difficulties. Achieving economies of scale and scalability without sacrificing material performance calls for creative production techniques. In order to get beyond regulatory obstacles, a consistent framework and thorough safety assessments of materials infused with nanotechnology must be established. It's still difficult to strike a balance between the various requirements of various food products while taking shelf life and temperature into account. Improving consumer education to encourage eco-friendly choices, developing infrastructure for effective collection and recycling, and developing processing technologies for waste-derived biomaterials are the ways to find solutions. It is imperative to tackle these obstacles in order to effectively include biomaterials and promote an eco-friendly and sustainable packaging paradigm [ 134 ]. In order to stimulate innovation and standardization in biomaterials for sustainable food packaging, cooperation between industry players, research organizations, and governmental agencies will be crucial in the future. To allow their wider acceptance in packaging applications, research efforts should concentrate on enhancing the performance and financial viability of biodegradable metals, ceramics, composites, and polymers. At the same time, careful consideration should be given to the integration of nanotechnology, with a focus on thorough safety evaluations and open labeling to win over customers. The sensory qualities and functionality of edible films and coatings must be improved in order to increase their consumer appeal and suitability for a wider range of food products. 
Further research should be done on waste-derived biomaterials, with an emphasis on creating scalable and effective manufacturing methods that can transform agricultural waste and post-consumer materials into high-quality packaging options. In order to ensure that these materials can be efficiently collected, processed, and reintegrated into the manufacturing cycle, recyclable and compostable packaging should be promoted. Additionally, consumer education programs ought to encourage ethical disposal and recycling methods. It is crucial to monitor and respond to changing market trends on a worldwide scale, coordinating the development of biomaterials with the changing requirements of diverse sectors [ 135 ]. This calls for ongoing market research and the adaptability to deal with new possibilities and difficulties. The worldwide use of sustainable packaging materials will also be facilitated by the harmonization of international norms and laws. Further investigation into bio-based polymers, such PLA and PHA, has the potential to yield packaging materials that degrade naturally. Investigating uses of nanotechnology, such as nanocomposites, can also improve barrier qualities and increase packaged food's shelf life. As the need for food safety and quality assurance grows, smart packaging technologies—such as sensors for in-the-moment freshness monitoring—are being integrated. It is also essential to promote the concepts of the circular economy by emphasizing the recyclable and compostable nature of biomaterials. Examining the economic viability and scalability of production techniques such as enzymatic synthesis or microbial fermentation might promote their widespread use. To ensure the safety of innovative biomaterials and create uniform testing methodologies, cooperation between industry, academia, and regulatory authorities is crucial. Sustainable food packaging will be shaped in large part by adopting a holistic strategy that takes into account every stage of the life cycle, from obtaining raw materials to disposing of waste at the end of its useful life [ 136 ]. The difficulties and potential prospects for biomaterials for sustainable food packaging highlight the importance of a comprehensive strategy. Players in the sector, researchers, and legislators will need to work together to overcome the financial, technological, and regulatory barriers. The benefits, however, are significant: less plastic waste, improved environmental sustainability, and a more thoughtful approach to packaging that meets the changing expectations of customers throughout the world. Innovation, education, and a firm commitment to building a more sustainable future for food packaging are the way forward. Data availability statement The data associated with this study has not been deposited into a publicly available repository. However, upon request, the data will be made available to facilitate transparency, peer review, and collaboration. We acknowledge the importance of sharing research data to enable other researchers to evaluate and build upon our findings, fostering trust in the scientific community. Our commitment to data availability reflects our dedication to advancing knowledge and promoting sustainable practices in biomaterials for food packaging. Additional information No additional information is available for this paper. CRediT authorship contribution statement Md. Zobair Al Mahmud: Writing - original draft, Investigation, Funding acquisition, Formal analysis, Data curation. 
Md Hosne Mobarak: Writing - review & editing, Methodology. Nayem Hossain: Supervision, Writing - review & editing. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
CC BY
no
2024-01-16 23:43:51
Heliyon. 2024 Jan 4; 10(1):e24122
oa_package/55/8f/PMC10788806.tar.gz
PMC10788809
38226003
Introduction Improving the poor physical health of people with severe mental illness (SMI) is a major challenge in psychiatry ( Firth et al., 2019 ). People with SMI have a substantially reduced life expectancy of 15-20 years compared to the general population ( Hjorthøj et al., 2017 ; Laursen et al., 2019 ), largely caused by poor cardiometabolic health ( Correll et al., 2022 ; Walker et al., 2015 ). Modifiable lifestyle factors such as reduced physical activity and increased sedentary behavior ( Kruisdijk et al., 2017 ; Stubbs et al., 2017 ; Stubbs et al., 2016a ; Stubbs et al., 2016b ; Vancampfort et al., 2013 ), smoking ( de Leon & Diaz, 2005 ), and poor nutrition ( Teasdale et al., 2017 ) play a major role in these poor health outcomes. Furthermore, people with SMI experience difficulties in psychosocial functioning and report a decreased quality of life, which can be related to various factors such as psychopathological symptoms, social and occupational impairments, cognitive deficits, impairments in emotional experience, and self-stigma ( Sarraf et al. 2022 ; Świtaj et al., 2012 ). Studies have related various lifestyle factors to these psychosocial impairments, including physical activity, sedentary behavior, and daily activities such as problematic smartphone use and internet gaming (K.C. Chang et al., 2022 ; Y.H. Chang et al., 2020 ; Deenik et al., 2018a ; Firth et al., 2019 ). Over the past decades, research has aimed to improve this health disparity by developing targeted interventions aimed at lifestyle factors in people with SMI ( Chu et al., 2018 ; Firth et al., 2019 , 2020 ; Huang et al., 2022 ; Teasdale et al., 2017 ). Systematic reviews and meta-analyses have demonstrated the efficacy of physical activity interventions for cardiometabolic health, psychiatric symptoms, quality of life, and global and cognitive functioning ( Firth et al., 2019 , 2020 ). Dietary interventions, often combined with physical activity interventions and psycho-education, yielded improvements in parameters of the metabolic syndrome and cardiorespiratory fitness ( Firth et al., 2020 ; Teasdale et al., 2017 ). Although there is emerging evidence showing the efficacy of lifestyle interventions in people with SMI, evidence on its effectiveness in daily routine healthcare is currently limited ( Deenik, 2019 ; Jakobsen et al., 2017 ; Naslund et al., 2017 ; Stubbs et al., 2018 ). There is a need for more research in real-world settings, including integrated lifestyle interventions and measuring and relating improvements on different physical and mental health-related outcome domains. Understanding of the mechanisms behind lifestyle improvement and how lifestyle interventions relate to different cardiometabolic health, psychosocial functioning, and mental health is essential for development of effective lifestyle intervention strategies in people with SMI. The relationship between lifestyle factors, cardiometabolic health parameters, mental health, social functioning and quality of life is complex and involves reciprocal interactions ( Firth et al., 2020 ; Vancampfort et al., 2012 ). A possible way of gaining insight into the interplay between these multiple factors is offered by the network approach, in which the organization of a system is studied by identifying system components (nodes) and the relations among them (edges) ( Borsboom et al., 2021 ). 
More recently, network intervention analysis (NIA) has been introduced as an extension of network models to identify treatment-induced changes in symptoms and their association structure over time ( Blanken et al., 2019 ). In this study, we applied NIA to examine direct and indirect changes in health-related outcomes (i.e., metabolic health, psychiatric symptom severity, social functioning, quality of life, and medication use) after a multidisciplinary lifestyle intervention in patients with SMI. To this end, we used data from the MULTI study, an 18-month cohort study evaluating a multidisciplinary lifestyle-enhancing treatment for inpatients with SMI ( Deenik et al., 2019 ; Deenik et al., 2018a ; Deenik et al., 2018b ). Data from this study have been analyzed previously, showing significant improvements in physical activity, metabolic health, and psychosocial functioning ( Deenik et al., 2019 ; Deenik et al., 2018a ) and a decreased use of psychotropic medication ( Deenik et al., 2018b ) after 18 months of MULTI compared to treatment as usual (TAU). In the present study, we applied NIA to gain insight into the direct and indirect changes in health-related outcomes.
Methods Study Design Data were used from the MULTI study, a cohort study evaluating a multidisciplinary lifestyle-enhancing treatment for inpatients with SMI. MULTI was implemented in February 2014 at wards for long-term mental healthcare (i.e., ≥1 year hospitalization) of a mental health care institution in the Netherlands (GGz Centraal). Due to the observational nature of the study, whereby MULTI was already implemented pragmatically at three wards before the start of this study, no randomization took place. For that reason, we accounted for the baseline variables that differed significantly between groups in the analysis, in line with previous publications of this study ( Deenik et al., 2018a , 2018b ). Full details on the study protocol are reported elsewhere ( Deenik et al., 2019 ). The study protocol was approved by the Medical Ethical Committee of the Isala Academy (case 14.0678). All subjects gave written informed consent in accordance with the Declaration of Helsinki. Study Population The sample consisted of subjects with SMI who had been hospitalized for at least one year at one of the inpatient wards. They were included if they had not received any other intervention related to lifestyle within the 18 months since the start of MULTI. They were excluded if they did not understand the content of MULTI, in consultation with their attending psychiatrist. For the current analysis, we included the baseline and follow-up data of subjects with sufficient actigraphy data (a wear time of ≥6 hours/day for ≥3 days ( Deenik et al., 2017 )) and available data on physical health and social functioning outcome measures (n=106; n=65 for MULTI and n=41 for TAU). Intervention MULTI is a multidisciplinary lifestyle-enhancing treatment that focused on decreasing sedentary behavior, increasing physical activity, and improving dietary habits to achieve overall lifestyle change. The treatment method was based on improving the daily structure and participating in an active day program, including sports- and work-related activities, psychoeducation, and daily living skills training. The frequency, intensity, and kind of activities could vary between patients and wards, as they were tailored to the individual patient's illness severity, capabilities, and interests. Participation of the ward nursing team was a core element of MULTI, which contributed to the culture change and the support of patients. Full details on the intervention program are reported elsewhere and are provided in the supplement ( Deenik et al., 2019 ). Patients who received TAU continued their treatment at their wards, which mainly concerned pharmacological treatment and a less structured day program that did not include any supported lifestyle interventions or adjustments. Assessments At baseline, severity of psychopathology was evaluated using the Clinical Global Impression (CGI) severity index ( Nolen, 1990 ), consisting of one item (global severity of disease), rated by the psychiatrist from 1 (not at all ill) to 7 (extremely ill). At baseline and 18-month follow-up, data on psychopathology, psychosocial functioning, quality of life, physical health, medication use, and actigraphy-measured physical activity were collected.
Psychotic symptoms were screened by the Positive and Negative Syndrome Scale Remission tool (PANSS-r), that includes eight core symptoms of schizophrenia: general psychopathology (2 items), positive symptoms (3 items) and negative symptoms (3 items), scored from 1 (absent) to 7 (extreme) ( Kay et al., 1987 ; Van Os et al., 2006 ). Psychosocial functioning was assessed using the Health of the Nation Outcome Scales (HoNOS) or HoNOS 65+ for elderly people ( Mulder et al., 2004 ; Wing et al., 1998 ). Both scales consist of 12 items, divided into four subscales (behavioral, symptomatic and social problems and impairment), scored from 0 (no problem) to 4 (very severe problem). Quality of life was scored on the EuroQol-5D ( EuroQol Group, 2011 ) and the brief World Health Organization Quality of Life Assessment scale (WHOQoLBref) ( De Vries & Van Heck, 1997 ; Su et al., 2014 ). The EQ-5D consists of five items, each measuring a dimension of health: mobility, self-care, usual activities, pain/discomfort and anxiety/depression, rated from 1 (no problems) to 3 (many problems). We calculated an index score ranging from 0 (worst QoL) to 1 (perfect QoL) ( Lamers et al., 2006 ). The WHOQoL-Bref contains 24 items that represent four domains of one's perceived quality of life: the physical (7 items), psychological (6 items), social (3 items), and environmental domain (8 items). Item scores ranged from 1 (very dissatisfied) to 5 (very satisfied) and were transformed into domain scores ranging from 4 to 20, according to the WHO guidelines ( WHO, 1996 ). The following physical health parameters were assessed: weight, abdominal girth, blood pressure, fasting glucose, triglycerides, total and HDL-cholesterol. Mean arterial pressure was calculated as the sum of one-third systolic blood pressure and two-thirds diastolic blood pressure. Psychotropic and somatic medication use was converted into defined daily dose (DDD) according to the Anatomical Therapeutic Chemical (ATC) Classification System ( World Health Organisation, 2017 ). Physical activity was measured with the ActiGraph GT3X+ (ActiGraph, Pensacola, Florida, VS), a hip-worn triaxial accelerometer. For the current analysis, average total activity counts per hour (TAC/h) was used as a continuous and detailed outcome variable of physical activity during daytime, where more counts indicate a higher level of physical activity. Severity of psychosocial functioning was scored by the responsible psychiatrist or nurse practitioner (not blinded to the treatment condition) and all other data were collected by trained research assistants. Although research assistants were not actively informed about the treatment condition, blinding was not assured due to visible differences in the day-to-day program. A detailed description of used settings and criteria for valid measurement is described elsewhere ( Kruisdijk et al., 2017 ). Statistical Analyses Descriptives and Pre-processing of Data All statistical tests were conducted using R version 4.04 ( R Team, 2013 ). First, baseline characteristics were compared between groups (MULTI vs. TAU) using Chi-squared statistics for categorical variables and independent t-tests for continuous variables. Change scores of the variables from baseline to 18-month follow-up were calculated. Independent t-tests were performed to determine whether change scores differed between the two treatment conditions, using Bonferroni corrections for statistical significance (i.e., p<.0045, based on 11 comparisons). 
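As a concrete illustration of the calculations just described, the sketch below derives mean arterial pressure from its stated definition, computes a baseline-to-follow-up change score, and compares groups with an independent t-test against the Bonferroni-adjusted threshold. The data frame and column names are simulated, illustrative assumptions, not the study's variables or analysis code.

```r
# Simulated stand-in data with illustrative column names (not the MULTI study data).
set.seed(42)
df <- data.frame(
  group  = rep(c("MULTI", "TAU"), times = c(65, 41)),
  abd_t0 = rnorm(106, mean = 105, sd = 12),  # abdominal girth at baseline
  abd_t1 = rnorm(106, mean = 103, sd = 12),  # abdominal girth at 18 months
  sbp    = rnorm(106, mean = 125, sd = 15),  # systolic blood pressure
  dbp    = rnorm(106, mean = 80,  sd = 10)   # diastolic blood pressure
)

# Mean arterial pressure: one-third systolic plus two-thirds diastolic.
df$map <- df$sbp / 3 + 2 * df$dbp / 3

# Change score from baseline to 18-month follow-up.
df$abd_change <- df$abd_t1 - df$abd_t0

# Bonferroni-corrected significance threshold for 11 comparisons (~.0045, as in the text).
alpha_bonferroni <- 0.05 / 11

# Independent t-test comparing change scores between treatment conditions.
res <- t.test(abd_change ~ group, data = df)
res$p.value < alpha_bonferroni   # TRUE if the group difference survives correction
```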
Continuous variables were examined for normality and homogeneity by comparing means with medians and standard deviations and by analyzing frequency histograms and normality plots. Second, we inspected the data for missing values of the selected variables. Data was missing in 0.94-9.4% of the variables, resulting in 90 complete and 16 incomplete cases. Missing data was inspected by groups of complete and incomplete cases using Chi-squared or independent t-tests. Missing data was assumed to be missing at random and was imputed using multiple imputation with chained equations, using the MICE package ( van Buuren & Groothuis-Oudshoorn, 2011 ). Predictive mean matching was used to impute missing values, the preferred approach for multiple imputation that produces the least biased estimates ( Marshall et al., 2010 ; van Buuren & Groothuis-Oudshoorn, 2011 ). We imputed the data five times and randomly selected one imputed dataset to estimate our network model. To ensure that our findings were not influenced by the selected imputed dataset, we also estimated a network on the four other imputed datasets and checked whether the most important edges were included in all networks, guided by previous literature (see Supplement) ( Liu et al., 2021 ). Network Construction We used NIA to examine the impact of MULTI on the changes in physical and mental health outcomes. For the NIA, we included the change scores from baseline to follow-up of the outcome measures as continuous variables, and the treatment allocation (MULTI or TAU) as a binary variable. Considering the relatively small sample size (n=106) for network analysis, we used a selection of 11 variables representing the symptomatic, functional and physical health outcome domains measured in the sample, based on the most significant changes in previous research ( Deenik et al., 2018a , 2018b , 2019 ) and the most relevant outcome measures. Selected variables were: sum scores of the positive and negative symptoms (PANSS-r), social functioning (HoNOS), quality of life (EQ-5D and WHOQOL-BREF), psychotropic and somatic medication (DDD), total physical activity (TAC/h), abdominal girth, total cholesterol, and mean arterial pressure. Also, we added two baseline measures that differed significantly between the MULTI and TAU group: symptom severity (CGI; mean difference -0.79, 95% CI -1.23 to -0.29) and age (mean difference 6.51, 95% CI 2.05 to 10.96). Diagnosis also differed between groups (χ 2 (1)=15.83, p<.0001), but was significantly (t(31)=2.73, p=.01) related to symptom severity, and was therefore not added to the model. The continuous variables included in the network model were normally distributed and could therefore be included in the network analysis without any transformation procedures. A Mixed Graphical Model (MGM) was used to estimate the network using the bootnet ( Epskamp et al., 2018 ) and mgm ( Haslbeck & Waldorp, 2020 ) packages. Networks are composed of nodes (variables) and edges, where the edges represent undirected conditional dependence relationships between the nodes. Thus, they indicate the association between two nodes controlling for their associations with all other variables of the network.
We applied LASSO regularization to estimate the network structure and, because the sample size was relatively small for the number of parameters, we applied cross-validation to select the LASSO tuning parameter, a recommended approach for discovering the most important edges and the overall network structure in small samples ( Isvoranu & Epskamp, 2021 ). Following recommendations in this field, we used nonparametric bootstrapping (bootstrapped samples n=1000) to assess accuracy of the edge estimates (see Supplement) ( Epskamp et al., 2018 ). Sensitivity Analyses We ran sensitivity analyses to assess to what extent the included links were sensitive to i) imputation of missing data, ii) tuning parameter selection and iii) inclusion of baseline variables (CGI and age). We inspected the networks visually and we compared node strength between the networks, using the qgraph package ( Epskamp et al., 2012 ). Node strength is a centrality index that quantifies how strongly a node is directly connected to the other nodes. Results can be found in the Supplement.
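A compact sketch of the network estimation step is given below, using the mgm and qgraph packages named above. The data are simulated stand-ins with illustrative node labels (in practice the completed, imputed change-score dataset would be supplied), the CGI score is treated as continuous purely for illustration, and package interfaces may vary by version, so this outlines the general approach rather than reproducing the study's analysis script.

```r
library(mgm)
library(qgraph)

# Illustrative stand-in data: 106 subjects, 11 continuous change scores, a binary
# treatment indicator, and two baseline covariates (CGI severity, age). Simulated
# here only so the sketch runs on its own.
set.seed(1)
n   <- 106
dat <- cbind(matrix(rnorm(n * 11), n, 11),      # change scores (continuous)
             sample(1:2, n, replace = TRUE),    # MULTI vs TAU (coded 1/2)
             rnorm(n), rnorm(n))                # baseline CGI (as continuous), age
colnames(dat) <- c("Pos", "Neg", "Soc", "EQ5D", "WQOL", "Pmed", "Smed",
                   "TAC", "Abd", "Chol", "MAP", "MULTI", "CGI", "Age")

types  <- c(rep("g", 11), "c", "g", "g")  # "g" = Gaussian node, "c" = categorical
levels <- c(rep(1, 11), 2, 1, 1)          # the treatment node has 2 categories

fit <- mgm(data = dat, type = types, level = levels,
           lambdaSel = "CV", lambdaFolds = 10,  # cross-validated LASSO tuning
           k = 2)                               # pairwise interactions only

# Plot the conditional dependence network and inspect strength centrality.
net <- qgraph(fit$pairwise$wadj, layout = "spring", labels = colnames(dat))
centralityPlot(net, include = "Strength")
```

The weighted adjacency matrix in fit$pairwise$wadj holds the regularized pairwise associations that correspond to the edges interpreted in the Results.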
Results

Sample
The sample consisted of 106 subjects (MULTI, n=65; TAU, n=41), of whom 66 (62%) were male and 40 (38%) were female. Mean age was 54.7 years (SD=10.8); 82.1% of patients were diagnosed with schizophrenia or another psychotic disorder, and 17.9% with another diagnosis (e.g., bipolar disorder, personality disorder). Measurements at baseline and follow-up of the included physical and mental health variables are presented in Table 1 and Fig. 1 . Differences in change scores from baseline to follow-up were statistically significant (p<.0045) between groups for psychotropic medication (mean difference 1.04, 95% CI 0.36 to 1.73), negative symptoms (mean difference 3.76, 95% CI 1.59 to 5.93), and abdominal girth (mean difference 4.57, 95% CI 1.51 to 7.64).

Network
Fig. 2 presents the network of MULTI in relation to mental and physical health-related changes from baseline to 18-month follow-up. In order of connection strength, results show that MULTI is directly connected to a decrease of negative symptoms (Neg), a decrease of psychotropic medication dosage (Pmed), an increase of actigraphy-measured physical activity (TAC), a decrease of abdominal girth (Abd), an increase of social functioning (reduction on the HoNOS score; Soc), and to an increase of positive symptoms (Pos). The network also identified associations between the different outcome variables, indicating that changes in these outcomes affect one another. These reciprocal connections were found within the outcome domains; for example, increased scores within the quality of life (WQOL-EQ5D) and symptom severity (Pos-Neg) domains are related to each other. Connections were also found between the different outcomes; for example, increased positive symptoms (Pos) and decreased quality of life (WQOL), and increased physical activity (TAC) and decreased abdominal girth (Abd), are reciprocally related. Mean arterial blood pressure (MAP) and somatic medication use (Smed) were not connected to the network and are visualized as separate nodes. The baseline variables CGI and Age are directly related to MULTI; Age was lower and CGI was higher in MULTI compared to TAU. Bootstrapping and sensitivity analysis showed that the important direct links between MULTI and negative symptoms and between MULTI and Pmed were included in about 90% of the bootstrap samples, indicating high stability (see Supplement).
Discussion

Main Findings
Using NIA, this study examined the direct and indirect effects of a multidisciplinary lifestyle-enhancing treatment on physical and mental health-related outcomes in people with SMI. Our main finding was that MULTI was directly associated with improvements in a range of health-related outcomes (i.e., negative symptoms, social functioning, prescription of psychotropic medication, actigraphy-measured physical activity, and abdominal girth). Notably, these improvements in symptomatic, functional, and physical domains were independent of each other, suggesting a unique association between MULTI and changes in these distinct outcome domains. Secondly, we identified conditional associations between the other outcome measures within and between the different outcome domains, which may represent potential indirect effects of MULTI (e.g., MULTI is linked to a decrease in abdominal girth, which is in turn linked to a decrease in total cholesterol and may therefore indirectly influence cholesterol). To our knowledge, this is the first study that applied network estimation techniques to investigate direct and indirect effects of a lifestyle intervention in mental health research. The network model confirms the central role of MULTI in relation to independent changes in physical and mental health-related outcomes. The network revealed that improvements in the different outcome variables were related in patterns, with connections that were often intuitively plausible. Only the direct link between MULTI and worsening of positive symptoms (e.g., hallucinations, paranoia, delusions, disorganization) could not be explained by clinical reasoning, as one would expect an absent or inverse relationship. When interpreting these findings, it is important to realize that the edges in the network represent conditional dependence relationships, meaning that they are conditioned on all other variables in the network. Considering the stronger links MULTI-Neg and Neg-Pos, the positive relationship MULTI-Pos could potentially be explained as a collider effect, resulting from a causal structure where the level of negative symptoms is caused by both MULTI and positive symptoms (i.e., MULTI → negative symptoms ← positive symptoms). That is, the treatment leads to a reduction in negative symptoms, and an increase in positive symptoms leads to an increase in negative symptoms. Conditioning on such a collider structure would explain the positive conditional dependence relationship between MULTI and Pos ( Epskamp et al., 2022 ). This explanation is supported by our findings of i) a non-significant marginal association between MULTI and Pos and ii) an unstable edge between MULTI and Pos in the imputed datasets and bootstraps. Therefore, we are reluctant to interpret this positive link between MULTI and worsening of positive symptoms.

Relevance of Findings
The results of this study shed light on the working mechanisms of lifestyle interventions on physical and mental domains in people with SMI, by using novel network estimation techniques and thereby combining various subjectively and objectively measured health-related outcomes into one network model.
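The collider argument above can be made concrete with a toy simulation (synthetic data, not the study's): when negative-symptom change is caused both by treatment (which lowers it) and by positive-symptom change (which raises it), conditioning on the collider induces a positive partial association between treatment and positive symptoms even though they are generated independently. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

treat = rng.integers(0, 2, n).astype(float)             # MULTI (1) vs TAU (0), assigned independently
pos = rng.normal(size=n)                                 # positive-symptom change, unrelated to treatment
neg = -1.0 * treat + 0.8 * pos + rng.normal(size=n)      # collider: treatment lowers it, Pos raises it

def residualize(x, z):
    """Remove the linear effect of z from x (used to obtain a partial correlation)."""
    z1 = np.column_stack([np.ones(len(z)), z])
    beta, *_ = np.linalg.lstsq(z1, x, rcond=None)
    return x - z1 @ beta

print(np.corrcoef(treat, pos)[0, 1])                     # marginal association: ~0
r_treat, r_pos = residualize(treat, neg), residualize(pos, neg)
print(np.corrcoef(r_treat, r_pos)[0, 1])                 # conditional on the collider: clearly positive
```

Because the two paths into the collider have opposite signs (treatment lowers negative symptoms, positive symptoms raise them), the induced conditional association is positive, matching the direction of the MULTI-Pos edge in the estimated network.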
Although previous studies have found similar effects of lifestyle and physical activity interventions on cardiometabolic health, psychiatric symptom severity, and social functioning in people with SMI ( Czosnek et al., 2019 ; Schmitt et al., 2018 ; Stubbs et al., 2018 ; Vancampfort et al., 2019 ), the underlying psychobiological mechanisms of these effects are still poorly understood ( Stubbs et al., 2018 ). Separately analyzing health-related outcomes of lifestyle interventions, when they are interrelated, may impede the refinement of theory and thereby the development of effective, targeted interventions. By applying NIA, this study provides insight into the potential direct and indirect effects of a lifestyle intervention on both physical and mental outcome domains. These findings provide guidance for the development, research, and implementation of these interventions in routine daily care. Targeting relevant outcome measures is a key element in the development of effective lifestyle intervention strategies and thus in reducing excess mortality and burden of disease in people with SMI. The treatment-induced changes in negative symptoms and social functioning in this study are particularly relevant, since the search for interventions targeting negative symptoms and cognitive dysfunction is a major issue in schizophrenia research. Antipsychotics are effective in reducing positive symptoms, but are generally of less benefit for negative symptoms and cognitive deficits. Other psychosocial and psychological interventions (e.g., cognitive behavioral therapy, assertive community treatment) may reduce negative and cognitive symptoms and improve social functioning ( Bighelli et al., 2021 ; Vita et al., 2021 ), but they are costly and access is poor ( Schizophrenia Commission, 2012 ). The findings of this study are in line with current literature demonstrating effects of lifestyle interventions on negative symptoms, cognitive deficits, and psychosocial functioning in individuals with SMI ( Firth et al., 2020 ). Therefore, lifestyle interventions could serve as effective, accessible, and low-cost treatment options to improve disease outcomes in SMI.

Limitations
Some limitations of this study should be considered. The first and most important limitation is the sample size of the study, which is relatively small for a network analysis and limits the generalizability and stability of the results ( Isvoranu & Epskamp, 2021 ). For this reason, we conducted an exploratory analysis to identify the most important edges and the overall network structure and performed additional analyses (i.e., bootstrapping and sensitivity analyses) to examine the accuracy and stability of the network. Furthermore, we included a limited number of variables in the network model to account for the relatively small sample size. By using previous literature ( Czosnek et al., 2019 ; Schmitt et al., 2018 ; Stubbs et al., 2018 ; Vancampfort et al., 2019 ) and previously analyzed results from this study ( Deenik et al., 2018a , 2018b , 2019 ), we aimed to select the most clinically relevant outcome variables. Although results need to be replicated in larger samples, this study is the first to explore direct and indirect effects of a lifestyle intervention on various physical and psychological outcome domains in patients with severe mental illness and may therefore provide important leads for future research ( Isvoranu & Epskamp, 2021 ).
Second, as we estimated undirected conditional dependence relationships in the network model, causal relationships cannot be inferred. In addition, we could not investigate the temporal development of changes in the outcome measures, since we only had pre- and post-treatment measurements. Still, results from this study may be helpful in generating hypotheses regarding the working mechanisms of treatment by identifying direct and indirect lifestyle-intervention-related changes in the outcome measures. Third, due to the observational nature of this study, whereby MULTI was already implemented pragmatically in three wards before the start of the study, no randomization took place. Consequently, the MULTI and TAU groups were not similar in size and characteristics, which may have confounded our results. We therefore accounted for baseline differences in our analysis by including the baseline variables that differed between groups as nodes in the network model and by comparing the networks with and without the baseline variables (see Supplement). These limitations should be addressed in future studies by using larger sample sizes, including multiple time points, and by adding more outcome variables (e.g., cognitive outcome measures) or breaking variables down into subgroups (e.g., subgroups of psychotropic medication) to investigate lifestyle-intervention-related changes in more detail.
Conclusion
This study provides a novel network approach to unravelling the complex effects of lifestyle interventions on physical and mental health outcomes in patients with SMI. Findings indicate that a multidisciplinary lifestyle intervention may directly influence (negative) symptom severity, medication use, social functioning, and physical activity. These insights provide guidance for the development, research, and implementation of lifestyle intervention strategies in people with SMI to improve their poor health status and reduced life expectancy.
Background/Objective
The effects of lifestyle interventions on physical and mental health in people with severe mental illness (SMI) are promising, but their underlying mechanisms remain unresolved. This study aims to examine changes in health-related outcomes after a lifestyle intervention, distinguishing between direct and indirect effects.

Method
We applied network intervention analysis to data from the 18-month cohort Multidisciplinary Lifestyle enhancing Treatment for Inpatients with SMI (MULTI) study in 106 subjects (62% male, mean age=54.7 (SD=10.8)), which evaluated changes in actigraphy-measured physical activity, metabolic health, psychopathology, psychosocial functioning, quality of life and medication use after MULTI (n=65) compared to treatment as usual (n=41).

Results
MULTI was directly connected to decreased negative symptoms and psychotropic medication dosage, and to improved physical activity and psychosocial functioning, suggesting a unique and direct association between MULTI and the different outcome domains. Secondly, we identified associations between outcomes within the same domain (e.g., metabolic health) and between domains (e.g., metabolic health and social functioning), suggesting potential indirect effects of MULTI.

Conclusions
This novel network approach shows that MULTI has direct and indirect associations with various health-related outcomes. These insights contribute to the development of effective treatment strategies in people with severe mental illness.
Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
CC BY
no
2024-01-16 23:43:51
Int J Clin Health Psychol. 2024 Jan 9 Jan-Mar; 24(1):100436
oa_package/34/37/PMC10788809.tar.gz
PMC10788810
38226268
Introduction
Rapid population growth, urbanization, and industry are the main causes of the rising energy demand [ 1 ]. Because they are not only sustainable but also economical, alternative energy sources are gaining popularity for the substantial financial savings they can produce. Between 2020 and 2040, the world's energy consumption is predicted to rise by 37 %, necessitating the development of new energy sources [ 2 ]. Providing us with power, heating, cooling, and other services, energy is a crucial component of contemporary life [ 3 ]. Although fossil fuels are the most widely utilized energy source, they are non-renewable and have a limited supply [ 4 ]. This has prompted a quest for alternative energy sources that are renewable and sustainable. As a green response to the expanding energy demand, these resources have grown in prominence over the past few decades [ 5 ]. As the globe strives for cleaner and more effective sources of electricity, alternative energy sources including solar, wind, hydropower, geothermal, biomass, and nuclear power are becoming increasingly significant [ 6 ]. These forms of energy can lessen pollution, mitigate the consequences of global warming, and are less expensive than fossil fuels [ 7 ]. Utilizing these new energy sources and using energy more effectively is key to ensuring the future supply of energy [ 8 ].

Today, the globe relies heavily on electricity to fulfill a variety of demands. The problems with current power management systems worsen as electricity demand continues to increase. Microgrids are crucial for electricity systems in this regard [ 9 ]. Traditional power systems are under tremendous strain as a result of the rising electricity demand, which might result in regular outages or blackouts. Microgrids are a good option for managing the power supply more effectively [ 10 ]. Small electrical systems known as microgrids are generally linked to the primary utility grid but may be cut off from it under specific circumstances, such as a grid breakdown or an emergency [ 11 ]. The microgrid can form an "islanded" network of users and can offer backup electricity [ 12 ]. The term "microgrid" refers to a small-scale network of distributed energy resources that are often situated close to one another. It is made up of several parts, including local loads, energy storage systems, solar panels, wind farms, management and control networks, and communication technology [ 13 ]. The desire to lower greenhouse gas emissions and improve the reliability and safety of the electrical supply is what is driving the growth of microgrids. Microgrids provide a higher level of autonomy, enabling users to control their energy generation and consumption, which can be advantageous for the energy sector's financial and environmental impacts [ 14 ]. By utilizing renewable energy sources, lowering greenhouse gas emissions, and promoting green energy generation and consumption, microgrids may support an energy system that is favorable to the environment. Management of electricity is becoming increasingly significant in today's society. Due to their capacity to deliver safe, dependable, and affordable energy services, microgrids have proven to be a successful option. An overview of microgrids and their significance as a crucial element of the ideal energy management system is given in this study [ 15 ].
Organizations may improve energy effectiveness and financial savings, lessen dependency on the main electrical system, and deliver electricity to remote regions or those with restricted access to the main electrical network by utilizing the special characteristics of microgrids [ 16 ]. Any ideal energy management system must include microgrids for enterprises to achieve the highest levels of energy efficiency and cost savings. Microgrids will be even more crucial as a tool for guaranteeing efficient and effective energy management as technology develops [ 17 ]. AI is being utilized to address complicated challenges in microgrids, and managing electricity optimization is becoming more and more crucial to how microgrids are run. The advantages of employing AI to solve these problems will be covered in this article along with any potential obstacles that may need to be solved to fully realize AI's capabilities. The article will focus on the role of AI in addressing microgrids by applying energy management optimization methodologies and technologies [ 18 ]. Energy management optimization is the process of adjusting a system's energy usage depending on its present and potential future energy needs, such as a microgrid. AI assistance or physical labor are also options for completing this. Reduced energy use, financial savings, and a sustainable energy supply are the goals of energy management optimization [ 19 ]. To optimize energy management, AI is mostly used to identify effective solutions to challenging issues that arise inside the microgrid. Artificial intelligence (AI) algorithms can quickly evaluate vast volumes of data to find patterns and trends. These discoveries may then be applied to optimize the microgrid's energy use and raise its effectiveness [ 20 ]. The use of AI to optimize energy management has several advantages [ 21 ]. AI algorithms, for instance, may be used to automate energy management choices and more accurately estimate energy consumption. AI-based solutions can be employed as well to find potential for energy savings and offer suggestions for how to improve energy use. AI may also facilitate more efficient processes for making decisions and lower operational expenses [ 22 ]. An effective method for raising the effectiveness and sustainability of microgrids is energy management optimization. Engineers and scientists can handle challenging issues in the microgrid by using AI for energy management optimization. Utilizing AI to optimize energy management has numerous potential advantages, but certain hazards and difficulties need to be taken into consideration [ 17 ]. Energy management is now more crucial than ever as the globe enters a new era of technological development [ 22 ]. Systems for managing energy have been created to maximize the use of existing energy resources as many nations work to make their networks "greener" and minimize emissions [ 23 ]. To enhance the efficiency and dependability of the system, these systems employ optimization approaches to utilize and integrate alternative energy sources, such as solar and wind power. This requires applying several iterations of an optimization method to the system [ 24 ]. Technology improvements have made it possible for researchers to identify methods to enhance managing electricity optimization in microgrids as the globe faces more complicated energy concerns. Small-scale power systems known as microgrids are used to supply electricity to nearby areas. They are often linked to the primary electrical grid [ 25 ]. 
They are playing a bigger role in the effective and trustworthy distribution of energy, especially in places with few resources and poor access to power. Since various studies on the improvement of energy management in microgrids have been carried out recently and are discussed in the following examples, there has been an increase in interest in the study and development of energy management systems for microgrids. M. A Kamarposhti [ 26 ] and colleagues proposed a management system for optimal microgrid operations, factoring in existing capacities in the electricity market. The operator of the microgrid, responsible for its secure operations, must engage in a planning process that makes the most of the network's components. To achieve this, they sought to guarantee adequate reliability in the generation resources, to reduce costs and environmental pollution from energy production. This was where the artificial bee colony (ABC) algorithm offers a solution, as it had been used to optimize costs and minimize environmental pollution by finding the optimal production power of distributed generation. The Archimedes Optimization Algorithm (AOA) was crafted by Al-Gazzar, M. M. et al. [ 27 ] drawing inspiration from the buoyancy principle and embodying a meta-heuristic optimization algorithm. The purpose of this algorithm was to identify the most cost-effective operation of interconnected microgrids (IMGs), which incorporate various forms of distributed generation (DG), namely solar photovoltaic (PV), wind turbine (WT), and micro-turbine (MT). The ultimate aim of this program was to minimize the aggregate cost of power generation, taking into account the exchange of power between the IMGs and the utility, all while factoring in any underlying technical constraints. The AOA algorithm's efficiency was manifested through a comparison with the particle swarm optimization algorithm (PSO), which represents another optimization method. The results of this comparison highlighted the potential of the AOA approach to curtail electricity consumption, decrease electricity costs and utility bills, and advance micro turbine (MT) performance for different daily loads by governing energy transfer between microgrids and the utility. A metaheuristic algorithm was devised by Abaeifar, A. et al. [ 28 ] to tackle the EMS problem - the Inertia-Weight Local-Search based Teaching-Learning-Based Optimization (IWLS-TLBO). This approach was inspired by the human ability to learn, where self-perception and regulation are taken into consideration while making decisions based on past experiences. To assess the efficacy of the IWLS-TLBO algorithm with its predecessor, TLBO, and other metaheuristics, a comparison was drawn between the outcomes of the IWLS-TLBO algorithm on a range of benchmark functions and those yielded by TLBO and its counterparts. This analysis aimed to demonstrate the IWLS-TLBO algorithm's capacity to delve into uncharted territory and make the most of established solutions, thereby enabling more fruitful exploration of the search space. In the realm of isolated microgrids reliant on renewable energy sources, a quandary concerning the efficient allocation of resources plagued emergency management services. However, this issue was rectified through the optimization of a unit commitment problem via the IWLS-TLBO algorithm. 
The fruits of this labor were evidenced in simulation results that established the superiority of the IWLS-TLBO algorithm over other metaheuristic algorithms in both solution quality and convergence speed. Reza Sepehrzad et al. [ 29 ] suggested a control strategy incorporating particle swarm optimization (PSO) and energy management algorithms to enhance the reliability, control levels, and incorporation of microgrids into existing electrical grids through improved power distribution. The proposed operational strategy relied on the load profile and power generation resources to predict power. To refine the energy management strategies, a multi-objective problem was solved with the PSO algorithm, and the optimization results were given to the fuzzy controller and power distribution management (PDM) unit. This improved power distribution in electrical grids. A comprehensive operating procedure for islanded and grid-connected microgrids, which takes into account their stability against grid fluctuations, was composed of an optimizer, a PDM unit, and a fuzzy controller. Additionally, to support the HESS and bolster its reliable performance, an auxiliary power control unit (APCU) is proposed. The utilization of MATLAB/Simulink was used to evaluate the effectiveness of the proposed structure when applied to the net power of islanded and grid-connected microgrids. This structure divides power into two components-high-frequency (super-capacitor) and low-frequency (battery and APCU). The results of the proposed algorithm and simulation were then analyzed. The effectiveness of CSOS for solving the EMO problem in Microgrids was assessed by Omar, B. et al. [ 30 ] by using a chaotic symbiotic organism search (CSOS) algorithm. With the aid of a chaotic map, the approach was able to rapidly converge with a wider search space coverage when looking for solutions under various exploiting constraints. These results were then compared to those acquired from other scalable algorithms, such as GA and PSO, in terms of operating costs on a practical microgrid linked to public services. The comparison proved the efficiency of CSOS. In terms of the objective function, the novelty of MCGA is not in the objective function itself but in the optimization method and techniques applied to the TEMS problem. The objective is to optimize the parameters and minimize the daily electricity price in an integrated clean energy microgrid. The contribution of the proposed approach is to modify and adapt the Chaos Grasshopper algorithm, specifically designed for TEMS in microgrids. The MCGA incorporates innovative strategies to enhance the optimization process and improve the solution quality. These modifications result in a significantly improved optimal solution for the overall daily electricity price compared to existing research approaches. By conducting comparative simulations with established methods like the Hybrid Optimization Model for Electric Renewables (HOMER), GAMS, Grey wolf optimizer (GWO), and Mixed-Integer Linear Programming Approach (MILPA), the effectiveness and superiority of the MCGA in achieving better solutions for the objective function are demonstrated. Therefore, while the objective function remains the same as in existing research, the novelty of the proposed approach lies in the modified Chaos Grasshopper Algorithm optimization, leading to improved results and contributing to the advancement of techno-economic energy management in microgrids. 
Future scope: Future research should integrate emerging energy storage technologies, optimize power generation schedules, consider demand response mechanisms, evaluate advanced control strategies, and analyze environmental and economic factors. This will improve energy management and system efficiency, synchronizing energy supply and demand. Evaluating advanced control strategies' impact on performance, stability, and energy production can contribute to effective hybrid system management.

Advancements: The research study optimizes hybrid energy systems for efficiency, cost reduction, grid resilience, scalability, adaptability, and transferability. The algorithm intelligently manages energy sources, storage systems, and control mechanisms, resulting in increased energy production and reduced losses. Cost reduction is achieved by maximizing renewable energy production, minimizing fossil fuel-based power generation, and optimizing energy storage systems for peak demand periods. The algorithm's flexibility enhances grid resilience and can be applied to other renewable energy systems, improving efficiency and cost-effectiveness across multiple technologies.

Application: Optimizing hybrid energy systems can lead to various applications, such as renewable energy integration, microgrid and off-grid systems, energy cost savings, grid resilience, peak shaving, demand response, and decentralized power generation. These systems maximize renewable energy utilization, reduce fossil fuel dependence, and promote sustainable, efficient energy solutions, contributing to a cleaner and more resilient future.

Enhancements: The enhancements study focuses on optimizing hybrid energy systems by analyzing efficiency, reliability, and energy output. Researchers propose improvements in renewable energy technologies, battery storage capabilities, and advanced control algorithms. They explore energy storage technologies like advanced battery, compressed air, and hydrogen storage, and address grid integration and interconnection for seamless operation. The study proposes innovative solutions to improve performance, reliability, cost-effectiveness, and scalability, paving the way for a more sustainable and resilient energy infrastructure.

Constraints: Hybrid energy system optimization faces constraints such as technological limitations, resource availability, economic considerations, policy frameworks, stakeholder engagement, and integration. Technological limitations and site-specific factors impact performance and feasibility. Economic constraints, such as initial investment costs and payback periods, require addressing. Researchers can invest in renewable energy and energy storage technologies, and collaborate with policymakers and stakeholders to overcome these challenges.
Results and discussions This research aims to develop a new approach to optimize the energy production of a hybrid system. The proposed approach involves using an optimization algorithm to determine the optimal operating conditions of the hybrid system components, such as Photovoltaic, Fuel Cells, and BESS, to maximize energy production. Fig. (4) shows a flowchart of the methodology steps. To ensure the efficient operation of the microgrid, it was necessary to simulate its performance under varying conditions. The simulation was carried out using solar radiance and load fluctuation as inputs to test the effectiveness of the energy management strategy that had been proposed. The micro-grid system that was used in the simulation had a DC bus voltage of 400 V and a maximum power output of 352.2 kW. It also had a battery capacity of 1400 Ah, with minimum and maximum values for the state of charge (SOC) set at 8 and 68% respectively. Fig. (5) provides a visual representation of the typical PV power and load power profiles for the microgrid system that was studied. The graph shows how these two parameters vary over time, with PV power being generated during daylight hours and load power fluctuating throughout the day. The simulation allowed for an in-depth analysis of how the energy management strategy would perform under different scenarios. By testing the system's response to varying levels of solar radiance and load fluctuations, it was possible to identify any potential issues or areas for improvement. Consequently, the simulation provided valuable insights into how the microgrid system would operate in real-world conditions. By using this information, it is possible to optimize its performance and ensure that it meets the energy needs of those who rely on it. Fig. (5) illustrates the typical power generated by PV and consumed by loads in the system under investigation. Cost parameters have been supplied for both the load and the PV systems (probably referring to solar installations) based on [ 41 ]. These cost parameters are probably connected to some modeling or simulation that is being done, potentially in the area of electrical engineering or renewable energy. Fig. (6) associated with [ 41 ] is being used to provide these cost parameters. This figure could potentially include a graph or chart displaying the different costs associated with implementing and operating the load and PV systems. Understanding the economics of the system is an important case. The price of electricity can vary widely depending on where it is generated, how it is generated, and who is supplying it. Factors like government subsidies, infrastructure costs, and maintenance costs can all play a role in determining the final price that consumers pay for electricity. As the world shifts towards renewable energy sources, we can expect the cost of electricity generated by wind, solar, and other renewable technologies to continue to fall, making them more competitive with traditional power sources. The Modified Chaos Grasshopper Algorithm was compared with different optimization algorithms such as Hybrid Optimization Model for Electric Renewables (HOMER) [ 42 ], GAMS [ 43 ], Grey wolf optimization (GWO) [ 20 ], Mixed-Integer Linear Programming Approach (MILPA) [ 44 ] to evaluate its efficiency. As illustrated in Fig. (6) , the power demand was higher than the available solar power. To provide enough power, the energy management strategy proposed assigning a power reference for the grid, battery, and fuel cell. 
The optimal solution is shown in Fig. (7) and indicates that the suggested algorithm was successful. Two principal electricity pricing periods—from 6:30 to 24:00 and from 24:00 to 6:30—are depicted in Fig. (7) . Battery power and fuel cell energy are more expensive than grid electricity during the low grid pricing era. As a result of the lower power rates during the 1 to 6-h and 24-h periods, the main grid can provide the bulk of the demand. After six in the morning, the fuel cell takes over for energy production, and the battery supplies energy when the weather changes. After 14:00, grid prices started to decline, which led to an increase in load power that peaked at 16:00. The fuel cell is usually the critical energy source during this timeframe, as evidenced by the electricity profiles. However, the power from the grid is regulated by market tariffs. The battery's state of charge is used to manage the exchanged power. Fig. (7) displays the battery SoC profiles for multiple comparison techniques. Table 4 includes data and simulation results for the MCGA and compares it with other methods that were studied. The table records essential metrics such as the final “SoC” (State of Charge), which refers to the amount of energy stored in a battery or cell, as well as the method's efficiency, which reflects the ratio of energy output to energy input. Additionally, the table includes information about optimizer efficiency, which is a measure of how well the method can find optimal solutions, and the mean operating cost, which indicates how expensive it is to operate using this method on average. By examining these values and comparing them to other methods, researchers can gain insights into the effectiveness and practicality of using the MCGA method for their purposes. So, the data presented in Table 4 is crucial for understanding the simulation results and making informed decisions based on them. Table 4 in the study compares the battery's efficiency and final state of charge (SoC) using different metaheuristic algorithms. The results indicate that the proposed MCGA provides better results than the other comparative methods. However, the values obtained are low, which means there is still room for improvement in the performance of the proposed method. Therefore, the study concludes that using metaheuristic-based algorithms to optimize techno-economic energy management strategies in microgrids is useful. The proposed method provides a more effective optimal solution than the other current approaches, with high precision and flexibility, and resistance to changes in power prices and environmental constraints. In summary, the study demonstrates the potential of metaheuristic algorithms for optimizing the performance of techno-economic energy management strategies in microgrids. The proposed Modified Chaos Grasshopper Algorithm provides a more effective solution than other current approaches. However, the study also highlights the need for further improvements in the performance of the proposed method.
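The dispatch pattern described above (cheap grid power overnight, the fuel cell carrying most of the daytime demand under the higher tariff, and the battery buffering within its SoC window) can be illustrated with a deliberately simplified rule-based dispatcher. This is a toy illustration of the qualitative behaviour, not the MCGA optimization: the load and PV profiles, tariffs, operating costs and the 100 kW fuel-cell rating are made-up placeholders, and the battery capacity is only a rough 1400 Ah x 400 V conversion.

```python
import numpy as np

hours = np.arange(24)
# placeholder profiles: an evening load peak and a midday PV peak (kW)
load = 150 + 100 * np.exp(-((hours - 16) ** 2) / 18)
pv = np.clip(250 * np.sin(np.pi * (hours - 6) / 12), 0, None)
# two-level tariff roughly mimicking the 24:00-6:30 low-price window ($/kWh, placeholder)
grid_price = np.where(hours < 7, 0.08, 0.20)
fc_cost, batt_cost = 0.15, 0.12            # placeholder $/kWh operating costs
fc_rating = 100.0                          # assumed fuel-cell rating (kW), not from the paper
capacity = 1400 * 0.4                      # ~560 kWh from 1400 Ah at the 400 V bus (rough conversion)
soc, soc_min, soc_max = 0.40, 0.08, 0.68

schedule = []
for h in hours:
    net = load[h] - pv[h]                  # residual demand after PV (kW over a 1-h step)
    grid = fc = batt = 0.0
    if net <= 0:                           # PV surplus: store what fits in the battery
        charge = min(-net, (soc_max - soc) * capacity)
        soc += charge / capacity
        batt = -charge
    elif grid_price[h] < fc_cost:          # cheap night tariff: buy from the grid
        grid = net
    else:                                  # daytime tariff: fuel cell first, then battery, then grid
        fc = min(net, fc_rating)
        discharge = min(net - fc, (soc - soc_min) * capacity)
        soc -= discharge / capacity
        batt = discharge
        grid = net - fc - discharge
    schedule.append((int(h), grid, fc, batt, round(soc, 3)))

daily_cost = sum(g * grid_price[h] + f * fc_cost + abs(b) * batt_cost
                 for h, g, f, b, _ in schedule)
print(round(daily_cost, 2))
```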
Conclusions The present study proposes a modified Grasshopper Optimization Algorithm (MCGA) as a techno-economic energy management strategy (TEMS) optimization technique in microgrids. The method is tested on an integrated clean energy microgrid comprising a fuel cell, battery storage, and photovoltaic system in independent and grid-connected modes. To assess its effectiveness, the proposed strategy is compared with other optimization models such as HOMER, GAMS, GWO, and MILPA. The findings demonstrated the performance of different metaheuristic algorithms in optimizing the techno-economic energy management strategies in microgrids. The Modified Chaos Grasshopper Algorithm (MCGA) consistently outperformed other comparative methods, with a mean final state of charge (SoC) of 33.29 % and system efficiency of 87.91 %. In contrast, HOMER, GWO, MILPA, and GAMS achieved lower mean SoC values of 32.21 %, 4.75 %, 28.66 %, and 31.53 % respectively, along with lower system efficiencies ranging from 85.54 % to 85.69 %. Furthermore, MCGA exhibited the highest best-case and median SoC values of 33.28 % and 33.29 %, surpassing the other methods. The worst-case SoC value for MCGA was 33.32 %, indicating its robustness compared to the other algorithms. In terms of system efficiency, MCGA also demonstrated superiority, with the highest best-case and median values of 87.98 % and 87.91 % respectively. The worst-case efficiency for MCGA was 87.88 %. Examining the standard deviations (StD) of the cost per power values, MCGA achieved a relatively low StD of 0.0124$/kW, suggesting consistency in cost effectiveness. Similarly, MCGA exhibited a lower StD for daily cost compared to the other methods, indicating its stability in achieving cost-efficient operation. The simulation results demonstrate that the proposed approach effectively reduces the daily electricity price and optimizes the microgrid components' parameters. Compared to the alternative models, the suggested algorithm significantly improves precision, flexibility, and resistance to changes in power prices and environmental constraints. Therefore, the MCGA technique is a practical and effective solution for optimizing microgrids' techno-economic energy management strategy. Future research endeavors could explore the application of the MCGA technique in other renewable energy systems or larger-scale microgrid systems. Furthermore, this study highlights the importance of integrating renewable energy sources, implementing efficient control strategies, and adopting energy storage technologies in hybrid energy systems. While renewable sources maximize utilization and reduce reliance on fossil fuels, their intermittent nature poses certain constraints. Energy storage technologies offer enhanced capacity, longer storage duration, and improved system performance, yet high costs may impede widespread adoption. Ensuring grid integration and interconnection is crucial for scalable deployment, and cost-reduction strategies can enhance affordability. Finally, scalability and flexibility are essential for successfully implementing hybrid energy systems.
This study presents a Modified version of Chaos Grasshopper Algorithm (MCGA) as a solution to the Techno-Economic Energy Management Strategy (TEMS) problem in microgrids. Our main contribution is the optimization of parameters to minimize the overall daily electricity price in an integrated clean energy micro-grid, incorporating fuel cell, battery storage, and photovoltaic systems. Through comparative simulations with established methods (HOMER, GAMS, GWO, and MILPA), we demonstrate the superiority of our proposed strategy. The results reveal that MCGA surpasses these methods, yielding significantly improved optimal solutions for the overall daily electricity price. Notably, the MCGA approach exhibits high precision, flexibility, and adaptability to power prices and environmental constraints, leading to accurate and flexible solutions. Thus, our proposed approach offers a promising and effective solution for the TEMS problem in microgrids, with the potential to greatly enhance microgrid performance. Keywords
Nomenclature
Solar panel loss factor; power temperature coefficient; output power ratio capacity of the PV panel; PV panel ratio; PV temperature under standard test conditions; standard incident radiation; hourly solar radiation incident on the PV panel; solar cell temperature; ambient temperature; activation voltage loss; ohmic voltage loss; concentration voltage loss; Nernst reversible voltage; universal gas constant; phase shift; DC bus accessible energy; generated energy for the grid; generated energy for the load; generated energy for the photovoltaic; generated energy for the battery; required power of the common bus; wind advection; social contact; fuel cell operating temperature; Faraday constant; standard potential; hydrogen partial pressure; oxygen partial pressure; water partial pressure; number of combined cells; available area; voltage source; internal resistance; battery current; battery potential; lithium quantity; real part of the resistance; capacity; bus voltage; electricity pricing; number of power sources; electricity market price; optimal SoC rate; cutoff frequency; damping ratio; gravity force.

System modeling
The primary objective of the current study is to investigate a microgrid that operates at low voltage and is connected to the main power grid. The microgrid is composed of various sub-systems, including a photovoltaic system (PV), a fuel cell (FC), a Lithium-ion battery energy storage system (BESS), and a range of loads present in the building. Fig. (1) illustrates the structure of the proposed power system and the different components that are integrated into it. The exchange of electricity between the microgrid and the main power grid is carried out with the help of a transformer and a DC/AC converter. These devices are critical in ensuring a smooth and efficient flow of power between the two grids. The transformer serves to step down the high-voltage power from the main grid to a lower voltage that is suitable for the microgrid, while the DC/AC converter enables the conversion of DC power generated by the microgrid into AC power that can be utilized by the loads in the building. To regulate the power generated by both the fuel cell and photovoltaic systems, a boost converter is employed. The boost converter steps up the voltage of the DC power generated by these systems to a level that is appropriate for use in the microgrid. This enables the microgrid to operate efficiently and ensures that the power generated by these systems is effectively utilized. On the other hand, the power from the battery is managed using a buck/boost converter. This converter is used to regulate the voltage of the battery's DC power, enabling it to be efficiently integrated into the microgrid. The converter ensures that the power from the battery is utilized effectively and that any excess power generated by the microgrid is stored in the battery for future use. Consequently, the integration of these various sub-systems and converters into the microgrid enables it to operate more efficiently, providing reliable and sustainable power to the loads present in the building. The current study aims to explore the performance of this microgrid in various operating conditions and to identify opportunities for further improvements. In the following, the mathematical modeling of the system components has been explained.
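To make the architecture just described easier to follow, here is a small illustrative representation of the grid-connected DC microgrid as Python dataclasses. It is not taken from the paper; the numerical values are the nominal figures reported for the simulated system (400 V DC bus, 352.2 kW maximum power, 1400 Ah battery with an 8-68% SoC window), the fuel-cell rating is left unspecified, and the converter assignments follow the description above.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Converter:
    kind: str                          # "boost", "buck/boost" or "DC/AC"

@dataclass
class Source:
    name: str
    converter: Converter
    rating_kw: Optional[float] = None  # left unspecified where the paper gives no rating

@dataclass
class Battery:
    capacity_ah: float
    soc_min: float
    soc_max: float
    converter: Converter

@dataclass
class Microgrid:
    dc_bus_voltage_v: float
    max_power_kw: float
    sources: List[Source]
    battery: Battery
    grid_interface: Converter          # DC/AC converter plus the step-down transformer

mg = Microgrid(
    dc_bus_voltage_v=400.0,            # nominal figures reported for the simulated system
    max_power_kw=352.2,
    sources=[Source("photovoltaic array", Converter("boost")),
             Source("solid-oxide fuel cell", Converter("boost"))],
    battery=Battery(capacity_ah=1400.0, soc_min=0.08, soc_max=0.68,
                    converter=Converter("buck/boost")),
    grid_interface=Converter("DC/AC"),
)
```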
Mathematical model of the PV system
Solar cells are semiconductor devices that use the photovoltaic effect to turn sunlight into electrical energy. This technology has transformed power generation by providing a renewable and sustainable alternative to traditional energy sources like oil, gas, and coal. When solar cells are joined together, they form a solar panel or module, which may generate electricity for a variety of uses, including powering homes and businesses and delivering energy to rural places that do not have access to the power grid. The mathematical model for the total power generated by the solar panel, which is a function of solar radiation and atmospheric conditions, is expressed as follows, where the terms describe the solar panel loss factor due to dirt, shadow, and temperature, the power temperature coefficient, the output power ratio capacity of the PV panel, the PV panel ratio, the PV temperature under standard test conditions, the standard incident radiation, and the hourly solar radiation incident on the PV panel. Here, the solar cell temperature is obtained as follows, using the ambient temperature.

Fuel cell (FC) system
Eq. (3) has been commonly used to evaluate the performance of a single Solid Oxide Fuel Cell (SOFC) [ 4 ]. This equation enables us to quantitatively measure the voltage output of a SOFC based on its oxygen partial pressure, hydrogen partial pressure, and operating temperature. By using this equation, we can effectively analyze the output voltage of a SOFC and assess its efficiency. Its loss terms describe the activation voltage loss, the ohmic voltage loss, and the concentration voltage loss. Here, the Nernst reversible voltage is achieved by the following equation, where the universal gas constant, the fuel cell's operating temperature, the Faraday constant, the standard potential, and the hydrogen, oxygen, and water partial pressures appear. The concentration voltage loss of the fuel cell can be mathematically defined by the following equation: The ohmic voltage loss and the activation voltage loss can be achieved by the following equations, where the gradually decreasing ionic resistance that relates to temperature increase is defined by the corresponding term [ 31 ]. Then, the current density of the element can be achieved by the following equation based on the Butler–Volmer relation [ 32 ]: Here, the terms represent, in turn, the transfer factor, the number of transferred electron moles, and the exchange current density. This paper assumes that [ 33 ]. Finally, the main output voltage of the fuel cell can be achieved by the following equation, where the number of combined cells appears. The output power of this element can then be achieved by the following equation, where the available area appears.

Lithium-ion BESS
Due to their high energy density, extended lifespan, and quick charging capabilities, lithium-ion batteries are a preferred choice for microgrid and hybrid systems. They give the system a dependable source of energy storage, guaranteeing a constant flow of electricity independent of the weather or other environmental factors. As a result, they may be used in standalone, hybrid, and microgrid systems. They offer a cost-effective energy storage alternative that may help lower power bills and achieve renewable energy targets.
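The PV, fuel-cell and battery equations themselves did not survive the extraction of this text, so the sketch below implements widely used textbook forms that are consistent with the variables described in these subsections: a derating-factor PV output model, a Nernst-voltage-minus-losses SOFC cell voltage, and a simple coulomb-counting SoC update. The default derating factor, temperature coefficient, standard potential, loss terms and efficiency are placeholder values, not figures from the paper, and the functions should be read as illustrative stand-ins rather than the authors' exact expressions.

```python
import numpy as np

R_GAS = 8.314      # universal gas constant, J mol^-1 K^-1
F_CONST = 96485.0  # Faraday constant, C mol^-1

def pv_power_kw(p_rated_kw, g_incident, g_stc=1000.0, derate=0.9,
                alpha_p=-0.004, t_cell=45.0, t_stc=25.0):
    """Derating-factor PV model: rated power scaled by the radiation ratio, a loss
    factor for dirt/shading and a linear temperature correction (defaults are
    placeholder figures, not the paper's)."""
    return p_rated_kw * derate * (g_incident / g_stc) * (1.0 + alpha_p * (t_cell - t_stc))

def sofc_cell_voltage(t_kelvin, p_h2, p_o2, p_h2o, e0=1.0,
                      v_act=0.25, v_ohm=0.10, v_conc=0.05):
    """Single-cell SOFC voltage: Nernst reversible voltage minus activation, ohmic
    and concentration losses (the loss terms are passed in as placeholder constants
    rather than modelled)."""
    e_nernst = e0 + (R_GAS * t_kelvin) / (2.0 * F_CONST) * np.log(p_h2 * np.sqrt(p_o2) / p_h2o)
    return e_nernst - v_act - v_ohm - v_conc

def battery_soc_step(soc, p_batt_kw, dt_h, capacity_kwh, eff=0.95,
                     soc_min=0.08, soc_max=0.68):
    """Simple coulomb-counting SoC update (discharge positive), clipped to the SoC
    window used in the reported simulation; a stand-in for the voltage-source-plus-
    internal-resistance cell model described in the next subsection."""
    if p_batt_kw >= 0:
        soc -= p_batt_kw * dt_h / (eff * capacity_kwh)   # discharging
    else:
        soc -= eff * p_batt_kw * dt_h / capacity_kwh     # charging
    return float(np.clip(soc, soc_min, soc_max))

# toy usage
print(pv_power_kw(352.2, g_incident=800.0, t_cell=50.0))                   # PV array output at 800 W/m^2
print(400 * sofc_cell_voltage(1073.0, p_h2=0.95, p_o2=0.21, p_h2o=0.85))   # assumed 400-cell stack voltage
print(battery_soc_step(0.40, p_batt_kw=50.0, dt_h=1.0, capacity_kwh=560.0))
```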
They are also perfect for peak shaving and demand response applications. A mathematical model of the battery cell is formulated using two components: a voltage source and an internal resistance. The voltage equation for a battery cell incorporates parameters such as charge state and temperature, and involves the battery current, the potential of the battery, the lithium quantity, and the real part of the resistance; the latter is achieved as follows, where the phase shift appears.

System output
The relationship between the DC bus accessible energy and the transferred power may be represented as a function, as depicted in Fig. (1) , involving the generated energy for the grid, photovoltaic, battery, and load. Similarly, the DC energy bus has been achieved as follows, where the capacity and bus voltage appear. A strategy for the optimal operation of the microgrid is implemented on the central controller, which considers load power, source operating costs, and electricity pricing. Additionally, this energy management system must also ensure adequate stability and power quality.

Fitness function
The photovoltaic power generated was insufficient to meet the required power, which resulted in a combination of power sources, including grid power, fuel cell, and battery, as determined by the chosen energy management strategy. The microgrid switched to grid mode for financial benefits, with grid power being the primary source of power unless the load exceeded the microgrid's power resources or the operating costs of these resources were excessive. This situation can be viewed as an optimization problem, where the aim is to minimize operating costs. The fitness function of the optimization problem can be represented by the following equation, where the symbols represent, in turn, the total period and the sample time, the number of power sources, the pricing of the electricity market, and the cost values of the DG, the battery operation, and the fuel cell. In comparison with existing research, the proposed approach introduces a new objective function in the optimization problem. The objective function in equation (15) represents the minimization of the overall cost of the microgrid's electricity consumption over a defined time period. The novelty of this objective function lies in its comprehensive consideration of multiple cost factors associated with different energy sources and grid electricity. Specifically, the objective function accounts for the costs related to the fuel cell, battery storage, and grid electricity. By including these cost components, the proposed approach offers a more holistic perspective on the optimization problem. Previous research often focused on individual cost factors, such as minimizing the use of grid electricity or optimizing the operation of specific energy sources. In contrast, our proposed approach takes into account the combined impact of various cost factors, allowing for a more accurate representation of the real-world operational costs of a microgrid. This novel objective function enables the optimization algorithm to identify an optimal combination of energy generation and utilization strategies that minimize the overall cost. By considering multiple cost elements simultaneously, our approach offers a more comprehensive and realistic solution to the energy management problem in microgrids.
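As a hedged illustration of an objective of this kind (total grid, fuel-cell and battery operating cost over a day, in the spirit of equation (15)), the sketch below uses simple linear per-kWh cost models and handles the power-balance and capacity restrictions discussed next with a quadratic penalty rather than the explicit constraints the paper formulates; all prices, cost coefficients and limits are placeholders.

```python
import numpy as np

def operating_cost(p_grid, p_fc, p_batt, price_grid, c_fc=0.15, c_batt=0.12, dt=1.0):
    """Daily operating cost in the spirit of equation (15): grid energy billed at the
    hourly market tariff plus simple linear per-kWh costs for the fuel cell and for
    battery cycling (the cost coefficients are placeholders)."""
    grid_cost = np.sum(price_grid * p_grid) * dt
    fc_cost = c_fc * np.sum(p_fc) * dt
    batt_cost = c_batt * np.sum(np.abs(p_batt)) * dt
    return grid_cost + fc_cost + batt_cost

def penalized_cost(p_grid, p_fc, p_batt, p_load, p_pv, price_grid,
                   fc_max=100.0, penalty=1e4):
    """Soft-constrained objective: quadratic penalties push an optimizer back towards
    power balance and the assumed fuel-cell capacity window (the paper states these
    as explicit constraints instead)."""
    imbalance = p_load - p_pv - p_grid - p_fc - p_batt
    over_cap = np.clip(p_fc - fc_max, 0, None) + np.clip(-p_fc, 0, None)
    return (operating_cost(p_grid, p_fc, p_batt, price_grid)
            + penalty * np.sum(imbalance ** 2)
            + penalty * np.sum(over_cap ** 2))

# toy usage over a 24-h horizon (hourly steps, placeholder tariff and dispatch)
T = 24
price = np.where(np.arange(T) < 7, 0.08, 0.20)
p_load, p_pv = np.full(T, 180.0), np.zeros(T)
p_grid, p_fc = np.full(T, 100.0), np.full(T, 60.0)
p_batt = p_load - p_pv - p_grid - p_fc          # battery covers the remainder
print(round(penalized_cost(p_grid, p_fc, p_batt, p_load, p_pv, price), 2))
```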
The objective function is subject to several restrictions, such as power capacity and power equilibrium. To ensure power balance, the total power supplied by the resources must be equal to the load power for every time interval $t$, assuming no microgrid losses. Therefore, the power equilibrium constraint can be expressed as follows: $$P_{grid}(t)+\sum_{i=1}^{N}P_{i}(t)=P_{load}(t) \quad (16)$$ Moreover, power has to be generated within a certain capacity by the production units, which possess their own production limits: $$P_{i}^{min}\le P_{i}(t)\le P_{i}^{max} \quad (17)$$ Eq. (18) represents the operational cost of each generator as a quadratic equation with a single variable: $$C_{i}(P_{i})=a_{i}+b_{i}P_{i}+c_{i}P_{i}^{2} \quad (18)$$ where $a_{i}$, $b_{i}$, and $c_{i}$ define the cost coefficients. The fitness function therefore places constraints on the output of the fuel cell and the battery as power producers, which are as follows: $$P_{FC}^{min}\le P_{FC}(t)\le P_{FC}^{max},\qquad P_{bat}^{min}\le P_{bat}(t)\le P_{bat}^{max}$$ This research also studied how the state of charge (SOC) of the battery can be integrated into the energy management strategy (EMS) to prevent severe drain or overcharge of the battery. A proposed fitness term was used to evaluate the battery's state of charge; it takes into account the power reference set forth by the energy management strategy, as well as the battery's share of the microgrid energy, where $SOC^{*}$ specifies the optimal SOC level. It is important to note that some prior studies concerning an economic EMS did not include the battery state of charge in their analyses. In this study, the optimization variables were the generator power references; to ensure power balance, the main grid power reference was not used as an optimization parameter. A revised version of the grasshopper optimization algorithm has been proposed in this research to solve this problem. The photovoltaic cells, main grid power, and fuel cell power were employed to meet the demand, with the principal purpose of the battery being to maintain a constant DC bus voltage. The flatness control approach can be used to stabilize this voltage, and equation (22) can be utilized to determine the battery power reference from the required power of the common DC bus, which is here obtained by a second-order trajectory-generation equation whose coefficients are attained from the cutoff frequency and the damping ratio of the filter. Modified chaos grasshopper optimization algorithm The standard grasshopper optimization algorithm The grasshopper optimization algorithm (GOA) is a recent method based on a population (swarm) optimization process and inspired by the behaviour of grasshopper insects. The population consists of a collective of grasshoppers, called a swarm, in which every member is a candidate solution to the problem. The initial stage begins by producing a random population as the first set of candidate solutions. Then, the value of every grasshopper is specified by evaluating the cost of the objective function. The procedure continues by attracting the swarm towards the best grasshoppers found so far, so that the remaining grasshoppers move towards these selected positions. In the present study, two major behaviours of grasshoppers are modelled: the sluggish, tiny movements of larval grasshoppers as opposed to the long-range, abrupt movements of adults, together with the food-seeking procedure, which is separated into two distinct phases of exploitation and exploration.
The movement $X_{i}$ of the $i$th grasshopper towards the target grasshopper is calculated by the following equation: $$X_{i}=b_{1}S_{i}+b_{2}G_{i}+b_{3}A_{i}$$ where $A_{i}$, $S_{i}$, and $G_{i}$ define the wind advection, the social contact, and the gravity force on the $i$th grasshopper, respectively, and $b_{1}$, $b_{2}$, and $b_{3}$ are random coefficients between 0 and 1. The grasshopper social contact ($S_{i}$) models the social forces between two grasshoppers, combining a repulsion force that ends encounters and an attraction force acting over a small distance scale: $$S_{i}=\sum_{\substack{j=1\\ j\ne i}}^{N}s\big(d_{ij}\big)\,\hat{d}_{ij}$$ where $d_{ij}$ determines the Euclidean distance between the $i$th and the $j$th grasshopper, and $\hat{d}_{ij}=(x_{j}-x_{i})/d_{ij}$ signifies the unit vector from the $i$th grasshopper to the $j$th grasshopper. The next equation, used for the optimization process, normalizes the attraction–repulsion forces within the search bounds: $$X_{i}^{d}=c\left(\sum_{\substack{j=1\\ j\ne i}}^{N}c\,\frac{ub_{d}-lb_{d}}{2}\,s\big(|x_{j}^{d}-x_{i}^{d}|\big)\,\hat{d}_{ij}\right)+\hat{T}_{d},\qquad c=c_{max}-l\,\frac{c_{max}-c_{min}}{L}$$ where $N$ is the number of grasshoppers, $ub_{d}$ and $lb_{d}$ are the upper and the lower bounds in the $d$th dimension, respectively, $\hat{T}_{d}$ is the value of the $d$th dimension of the target grasshopper, $c$ designates the decreasing coefficient that shrinks the comfort, repulsion, and attraction regions, $c_{max}$ and $c_{min}$ are the maximum and the minimum values of $c$, respectively, and, finally, $l$ and $L$ are the current iteration and the total number of iterations, respectively. Here, $s$ indicates the strength of the social forces, which is estimated by the next formula: $$s(r)=f\,e^{-r/l_{s}}-e^{-r}$$ where $l_{s}$ determines the attraction length scale and $f$ is the intensity of the attraction force. $G_{i}$ is the gravity component of the process and is obtained as $G_{i}=-g\,\hat{e}_{g}$, where $g$ indicates the gravitational constant and $\hat{e}_{g}$ is the unity vector towards the centre of the earth; finally, the wind advection ($A_{i}$) is obtained as $A_{i}=u\,\hat{e}_{w}$, where the drift constant is denoted by $u$ and $\hat{e}_{w}$ is the unity vector in the wind direction. Modified chaos grasshopper optimization algorithm To enhance the speed of convergence of this method, a mechanism is proposed for adapting the GOA's crucial parameters. The vital parameters for the convergence of the GOA are $u$, $b_{1}$, $b_{2}$, and $b_{3}$, and their adjustment is applied on the basis of chaos theory. Chaos theory is the study of unstable and unpredictable processes; its central idea is the analysis of highly sensitive dynamic systems in which any tiny variation can influence the outcome. For the reasons mentioned above, a greater variety can be created in the population generation of the GOA to enhance the algorithm's diversity. This modification improves the convergence speed of the GOA and helps it avoid falling into local optima [ 34 , 35 ]. A common formulation of a chaotic sequence is the following: $$x_{k+1}=f(x_{k})$$ where the index of the map is denoted by $k$ and the chaotic map generator function is described by $f(\cdot)$. In the modified GOA, the parameters $u$, $b_{1}$, $b_{2}$, and $b_{3}$ are generated based on the Kent map: $$x_{k+1}=\begin{cases}x_{k}/m, & 0<x_{k}\le m\\ (1-x_{k})/(1-m), & m<x_{k}<1\end{cases}$$ where $m$ is a control parameter and $x_{k}$ is a chaotic variable, both between 0 and 1. Fig. (2) illustrates the flowchart of the proposed MCGA. To further demonstrate the efficacy of the proposed MCGA, four unimodal and multimodal test functions were employed, and the results were compared with those yielded by several benchmark algorithms, namely the pigeon-inspired optimization algorithm (PIO) [ 36 ], the squirrel search algorithm (SSA) [ 37 ], the spotted hyena optimizer (SHO) [ 38 ], the lion optimization algorithm (LOA) [ 39 ], and the basic GOA [ 40 ]. The detailed parameter settings used for all optimized algorithms are shown in Table 1 .
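The sketch below is a deliberately simplified, single-coefficient illustration of the position update and Kent-map idea described above, applied to a sphere test function. The population size, c_max, c_min, the map parameter, and the way a single chaotic value replaces one random coefficient are illustrative assumptions; it does not reproduce the exact MCGA of this paper or the settings of Table 1.

```python
import numpy as np

def s_func(r, f=0.5, l=1.5):
    """Social interaction s(r) = f*exp(-r/l) - exp(-r)."""
    return f * np.exp(-r / l) - np.exp(-r)

def kent_map(x, m=0.7):
    """One step of the Kent chaotic map on (0, 1)."""
    return x / m if x < m else (1.0 - x) / (1.0 - m)

def chaos_goa(obj, dim=2, n=20, iters=100, lb=-5.0, ub=5.0,
              c_max=1.0, c_min=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lb, ub, (n, dim))
    target = min(pos, key=obj).copy()         # best grasshopper so far
    chaos = 0.37                              # initial chaotic value
    for it in range(1, iters + 1):
        c = c_max - it * (c_max - c_min) / iters
        chaos = kent_map(chaos)               # chaotic coefficient instead of rand()
        new_pos = np.empty_like(pos)
        for i in range(n):
            move = np.zeros(dim)
            for j in range(n):
                if i == j:
                    continue
                dist = np.linalg.norm(pos[j] - pos[i]) + 1e-12
                move += c * (ub - lb) / 2.0 * s_func(dist) * (pos[j] - pos[i]) / dist
            new_pos[i] = np.clip(c * chaos * move + target, lb, ub)
        pos = new_pos
        best = min(pos, key=obj)
        if obj(best) < obj(target):
            target = best.copy()
    return target, obj(target)

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = chaos_goa(sphere)
print("best f:", round(best_f, 6))
```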
In many cases, benchmark functions are used to evaluate the performance of algorithms and their ability to solve specific problems, and the settings summarized in Table 1 play an essential role in demonstrating the effectiveness and practicality of the algorithms being evaluated. Table 2 provides a comprehensive summary of the benchmark functions used for validating the algorithms, including their names, mathematical formulation, and constraints. Accurate and comprehensive validation is critical to establishing the credibility and usefulness of any algorithm, and Table 2 provides the information needed to support this validation process. Table 3 reports the validation of the MCGA compared to the other analyzed approaches. The results in Table 3 show that, among all the algorithms, MCGA has achieved the lowest minimum value and the best median value, indicating that it is the most effective algorithm for this problem. In addition, MCGA has a lower standard deviation than the other algorithms, which further confirms its robustness and efficacy. Moreover, DE achieved a better maximum value than the other algorithms, but its median value was not as good, suggesting that the performance of DE was not consistent across runs. The convergence index is a commonly used metric to evaluate the performance of optimization algorithms: it measures how quickly an algorithm reaches the global optimum of a given problem, and the faster an algorithm converges to the global optimum, the better its performance. In optimization algorithms, individuals represent candidate solutions that are evaluated and modified iteratively in search of the optimal solution. During the initial phase of the optimization process, individuals explore the entire search space quickly to identify potential solutions; as the optimization continues, individuals are refined and the search is narrowed to the most promising regions. Fig. (3) shows the convergence behaviour of the different algorithms on the standard functions. The convergence curve represents the rate at which an algorithm approaches the global optimum. The improved MCGA exhibits superior convergence speed and a better exploration–exploitation balance compared with the other optimization algorithms; it is designed to prevent premature convergence while still promoting rapid convergence, making it a promising solution for optimization problems. The convergence curves in Fig. (3) show rapid convergence during the initial iterations, indicating that individuals explore the search space at a faster rate, which allows the improved MCGA to identify potential solutions more efficiently and effectively. Overall, the results suggest that the MCGA outperformed the other compared algorithms, achieving the minimum value for all the analyzed test functions, i.e., the algorithm was successful at finding the optimal or best possible output for each test function. Moreover, MCGA showed the lowest standard deviation among the compared algorithms: its output was relatively consistent across multiple runs, and the variation between different runs was minimal.
This is a positive indication as it signifies that the algorithm was able to converge to the optimal solution effectively without getting stuck in a suboptimal solution, which could lead to high variance. CRediT authorship contribution statement Zhiyu Yan: Writing - original draft, Formal analysis, Data curation. Yimeng Li: Writing - review & editing, Writing - original draft, Software, Conceptualization. Mahdiyeh Eslami: Writing - review & editing, Writing - original draft, Software, Resources, Formal analysis, Data curation. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
CC BY
no
2024-01-16 23:43:51
Heliyon. 2024 Jan 3; 10(1):e23980
oa_package/ac/63/PMC10788810.tar.gz
PMC10788813
38226209
Introduction Semantic TRIZ (S-TRIZ) was pioneered by Verbitsky [ 1 ], who semantically combined the meaning of items with traditional TRIZ theory to develop problem and solution patterns. Text mining (TM) and natural language processing (NLP) techniques are highly relevant for extracting useful information and are generally applied to classification-related tasks [ 2 ]. Moreover, technology forecasting (TF) combines various methods, disciplines, and concepts, using both normative and exploratory approaches, to determine significant relationships between data [ 3 ]. TRIZ, the Russian abbreviation for "Theory of Inventive Problem Solving", was developed by G. Altshuller's team in the mid-twentieth century after analysing 40,000 technology patents, which revealed a set of patterns of technological evolution [ [4] , [5] , [6] , [7] ]. TRIZ evolution trends, through quantitative [ 8 ] and exploratory [ 3 ] methods, have been effectively applied to TF in various fields of technology [ 5 , [9] , [10] , [11] ]. Likewise, in the current era of Industry 4.0, technology development and innovation are undoubtedly essential for organizations across the globe. Nevertheless, this demands a swift response in terms of digital transformation, as Industry 4.0 alters the major operational systems related to product design, which includes linkages between functional decomposition and morphology (FDM) and TRIZ [ 12 ], processes, and services [ 13 ]. The use of S-TRIZ on patents has been gaining attention for the automatic extraction of knowledge and information to discover issues with TRIZ tools [ 3 , 14 ]. In addition, various studies have shown a positive attitude towards automating patent classification by focusing on topic modelling [ 15 ] and automating the process of collecting, analyzing, extracting, and interpreting patents using big data techniques [ 16 , 17 ]. The significance of automating and simplifying the analysis of patent documents has gained attention among TRIZ users for several years [ 18 ], and this research aims to present a holistic account of what has been achieved so far, reflecting the authors' motivation to provide a comprehensive review for the benefit of TRIZ practitioners and researchers. This research was conducted to review existing work on S-TRIZ with respect to text mining techniques, to help TRIZ developers and researchers maneuver through huge amounts of technical literature [ 19 ] so that it can be converted into practical applications of TRIZ tools [ 7 ]. Accordingly, a brief report will be presented for each research study, followed by a discussion of the obtained results. The guidelines of Kitchenham and Brereton [ 20 ] were adapted to conduct the systematic literature review (SLR) in this study. With reference to the first SLR step, "data collection", the search for the reviewed papers was conducted in the time frame of January 2009 to March 2022, and the retrieved papers were then saved in a local reference manager. In this study, the SLR search was mostly focused on S-TRIZ analysis that used data analysis techniques. Finally, 57 papers were assessed based on the SLR inclusion and exclusion criteria, followed by a quality assessment. The contributions of this study are as follows. • To identify the research on S-TRIZ methodology in terms of data analytics published between January 2009 and March 2022. • To briefly describe the methods and techniques used for developing and evaluating S-TRIZ.
• To highlight the limitations of S-TRIZ methods in technology development, innovation, and production. TRIZ is classified under systematic innovation and is a subset of innovation methods [ 21 ]. Recently, Sheu, Chiu [ 7 ] presented an updated TRIZ hierarchy that includes tools and techniques, methods, and philosophy (with seven pillars), as illustrated on the left side of Fig. 1 . Inventors currently apply various TRIZ tools [ 22 ] to develop technical systems, and patent analysis is a critical step in meeting this objective. Patents provide monopolized knowledge about the technical system. Different methods have been suggested to analyze patent information, such as identifying new technologies, assessing R&D activities, benchmark analysis, ranking patents, and retrieving prior art [ 23 ]. Furthermore, patent analysis or mapping based on training materials requires function recognition, keyword searching, document segmentation, abstraction identification, data clustering, result visualization, and data interpretation [ 24 ]. Additionally, some common patent analysis techniques are based on bibliometric information, the backward and forward citation relationships among patents, statistical approaches, and classification methods [ 25 ]. The key features of all types of patent analysis tools that should be considered to satisfy user expectations are listed as follows [ 26 ]. • Capability to search for and find the most relevant patents in a database. • Reliability in processing unstructured text and transforming it into a structured format. • Ability to apply different techniques to extract the most relevant information. • Capability to interact robustly with the database during analysis. • Ability to synchronize with other tools to communicate data. • Provision of a user-friendly interface, including multi-option facilities for users. Fig. 1 also indicates the text mining procedure applied to different types of documents to analyze and develop TRIZ theory. It commences by accessing a patent database such as USPTO, or a scientific database such as the Web of Science, followed by preparing the text for syntactic or semantic analysis. In the next step, the text is tokenized, lemmatized or stemmed, and transformed into features that the machine can understand. After the features have been selected and machine learning (ML) algorithms have been trained on them, the outputs are classified or clustered, and the results are interpreted accordingly. Therefore, bridging the available literature to the existing implementation gap in document analysis, especially for patents in the context of text mining, would be beneficial for TRIZ practitioners in understanding text mining applications in their product development and innovation processes [ 7 ]. Moreover, this would be vital for data scientists seeking to further enhance existing patent analysis based on TRIZ tools [ 2 ]. TRIZ is used to offer new ideas and solutions [ 27 ]; however, the application of computer-aided techniques to develop TRIZ needs further consideration. The remainder of this study is organized as follows: Section 2 explains how the research was conducted based on the SLR process; Section 3 provides a brief discussion, highlights the results, and explicitly responds to the research questions; and Section 4 presents the research limitations and conclusions.
Research method Multiple SLRs have been conducted on papers published in different fields [ 28 ] and have been validated as a means to objectively diagnose and scrutinize a research issue. Reviewing S-TRIZ systematically provides an opportunity to develop efficient TRIZ tools. As the term implies, a systematic literature review identifies, evaluates, and filters relevant research publications related to pre-determined research questions to provide a detailed overview to researchers and scholars [ 28 ]. Fig. 2 illustrates the methodology used to conduct the systematic review of S-TRIZ. The steps are detailed below. Define research questions To conduct this study, the following research questions were designed for further analysis. RQ1 What studies were conducted on S-TRIZ from January 2009 to March 2022? Our goal for this question is to identify the research articles that are most pertinent to advancing TRIZ with AI. RQ2 What techniques or methods have been used for S-TRIZ? This question aims to provide a comprehensive understanding of the selected articles by elucidating the specific TRIZ components developed and the application of AI in their development. RQ3 How can S-TRIZ facilitate technology development, innovation, and production? Through this question, we aim to uncover the benefits of integrating specific AI technologies with particular TRIZ components, shedding light on the advancements in engineering systems. RQ4 What are the state-of-the-art limitations of S-TRIZ? By addressing this question, we identify the current limitations in the automation of development, innovation, and production through the integration of AI and TRIZ, taking into consideration the constraints and challenges highlighted in existing studies. Select research strategy A structured research method was undertaken to perform a comprehensive study of existing research resources, predefining the exploratory steps and adopting structured research criteria to discover all related and available research. The systematic literature review was formulated using a structured research procedure to identify the essential materials for every study. Therefore, while considering the necessary SLR protocols, this research takes cohesive action to identify associated papers published during a particular period in popular repositories. The selected keywords were associated with TRIZ and patent analysis in terms of text mining in various repositories, with high emphasis placed on the research questions. The repositories used as sources are delineated in Table 1 . Define research query Searching for relevant articles is a paramount initial stage in conducting an SLR: high-quality inputs are required to harvest high-quality outputs, and the application of appropriate keywords leads to the most relevant articles that meet the scope of the research. The following search string was formed by permuting the selected keywords: "TRIZ" AND (patent OR "NLP" OR "natural language processing" OR "text mining" OR "evolution trends" OR "trend analysis" OR "technology forecast"). The selected query was used to collect information from articles in different publisher repositories; nevertheless, the keywords were slightly modified based on the search engine syntax criteria of the various resources. The process of conducting this study is illustrated in Fig. 2 .
Research options in all resources were left as default to include all types of publications, such as books, journal articles, and others, within the time frame of the studies. It also highlights the duplication removal and filtration steps, as shown in Fig. 2 . In addition, Fig. 3 illustrates the detailed procedures applied in searching the articles. Duplication removal and inclusion and exclusion strategies Employing the proposed research query in these repositories resulted in a collection of numerous publications. Taking the example of the Web of Science website, accessed under the license of the UKM library, the research query was entered in the search bar and the year range was chosen according to the research inclusion criteria. The full bibliographic information was extracted in RIS format and the papers were exported into the Endnote reference manager software [ 29 ] for ease of management and handling. Accordingly, all these processes were repeated manually in all selected repositories. Endnote has a user-friendly interface that not only displays the article title but also provides easy access to the bibliographic details, groups the references, populates the full text if available, exports them to a CSV file, and offers several other helpful capabilities. Fig. 2 shows that the research steps were conducted holistically and depicts the details of the initial search and the filtration (by title, abstract, and content) based on the inclusion and exclusion criteria. The inclusion metrics used for this research are listed in Table 2 . Exclusion criteria encompass papers falling outside the specified publication range, those lacking relevance to the research questions, those written in languages other than English, or those published in formats such as books or conference proceedings. Selection of the relevant papers among the collection of articles was performed diligently and meticulously in three phases. In the initial phase, relevant articles were selected by considering the relevance of their titles. In the second phase, the articles selected in the previous phase were reviewed by reading their abstracts. In the last phase, the remaining articles were carefully read against the inclusion and exclusion metrics to identify the most relevant articles. Defined metrics for quality assessment To ensure the accuracy of the selected articles and their consistency in meeting the research criteria, the assessment must be conducted effectively. Therefore, the SLR method recommends a quality assessment for all publications included in the final filtration phase. First, it is essential to define quality metrics for the corresponding research questions. Subsequently, all evaluated and included publications are verified to check whether they match the research questions. The key metrics used to assess this research were as follows. Metric 1: The chosen publication presents descriptions corresponding to the research questions. Metric 2: The selected publication provides in-depth information about the techniques used for TRIZ-based data analytics. Metric 3: The chosen publication explains evaluation techniques for TRIZ-based data analytics. Metric 4: The chosen publication explains benefits or limitations of TRIZ-based data analytics. With this, the authors reviewed the chosen publications with reference to the quality assessment metrics.
Evaluations of the chosen publications were based on a score of "1" if all the research questions were addressed in the publication, "0.5" if the research questions were partially clarified, and "0" if no explanations were provided to address the research questions. The assigned points were then added up to determine the total score for each publication. The total score provides a robust value for evaluating and assessing each chosen publication against the research questions. The chosen publications were further grouped by total score range, as shown in Fig. 4 , to give a reliable measure of evaluation. Assessment 1: Out of the 57 papers, 30 provide comprehensive descriptions addressing the research questions, while the remaining publications offer only partial explanations regarding the integration of TRIZ components with AI. Assessment 2: A substantial majority, 53 out of 57 papers, thoroughly elucidate the application of AI techniques in the development of TRIZ, showcasing a comprehensive understanding of the AI methodologies employed. Assessment 3: Focused on the evaluation aspect, 28 papers utilize AI techniques for thorough result assessments, indicating a robust approach. Additionally, 15 papers provide partial evaluations, some of which rely solely on TRIZ techniques. Assessment 4: Regarding benefits or limitations, only 12 papers distinctly discuss either the advantages or constraints of TRIZ-based data analytics. Conversely, the majority, comprising 31 papers, lacks explicit information on this aspect. Output achievement The output data in the proposed research were obtained after the assessment process was undertaken on the chosen publications. After fulfilling all SLR requirements, the output achievement is presented in a structured table for better comprehension. The summary is as follows. 1) Fig. 5 shows the frequency of journals that published TRIZ-based data analytic publications, reflecting the importance and role of this research area. 2) Table 4 shows the output of the selection process based on the research criteria, which acts as the fundamental component for retrieving highly relevant articles [ 20 ]. It presents the number of articles selected by year and illustrates the patent analysis process based on TRIZ tools using various algorithms and methods. 3) The rest of the paper explains the details of the information presented in Table 4 . Research argument synthesis The synthesis was performed to concisely link the research materials. The data extraction procedure is explained extensively in Section 2.4, with a final selection list of 57 publications. Fig. 2 illustrates the research framework according to the research questions, queries, criteria, and outputs. By delving into the presented SLR framework, 1136 publications were initially collected in the Endnote dataset. The collected data were then verified individually to ensure that the bibliographic information had been downloaded properly. Specifically, the initial collection included several duplications, and some reference fields were missing. Therefore, the collections were cleaned up by removing repeated publications and by filling in and correcting the reference fields, such as keywords, abstract, publication year, and authors' names. Subsequently, a group set was created in Endnote with three subset folders (filtration by title, filtration by abstract, and filtration by content) to perform the selection stages manually.
At these stages, 428, 231, and 106 publications were retained, respectively. Filtering papers based on the exclusion and inclusion criteria is tricky, overwhelming, and time-consuming. In addition, the 106 selected papers were exported to a CSV file for statistical analysis; CSV files allow numerical and textual data to be saved in a structured tabular format for further analysis. Finally, the articles chosen manually and cited in the next section were used to address the proposed research questions and analyze them in detail.
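A minimal sketch of the title-filtration step described above is given below: it applies the stated boolean keyword query to bibliographic records. The CSV file name and the "Title" column are hypothetical, and the real screening was performed manually in Endnote, so this only illustrates the kind of first-pass filter such a query implies.

```python
import csv

# Keyword groups mirroring the search string:
# "TRIZ" AND (patent OR NLP OR "natural language processing" OR "text mining"
#             OR "evolution trends" OR "trend analysis" OR "technology forecast")
REQUIRED = ["triz"]
ANY_OF = ["patent", "nlp", "natural language processing", "text mining",
          "evolution trends", "trend analysis", "technology forecast"]

def matches(text: str) -> bool:
    """True if the text contains 'TRIZ' and at least one secondary keyword."""
    t = text.lower()
    return all(k in t for k in REQUIRED) and any(k in t for k in ANY_OF)

def filter_by_title(csv_path: str):
    """Return rows whose 'Title' field satisfies the query (hypothetical schema)."""
    with open(csv_path, newline="", encoding="utf-8") as fh:
        return [row for row in csv.DictReader(fh) if matches(row.get("Title", ""))]

# Quick in-memory demonstration on invented titles.
sample_titles = [
    "Semantic TRIZ application in patent text mining",
    "Deep learning for image segmentation",
]
print([t for t in sample_titles if matches(t)])
```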
Results and discussion When implemented appropriately, S-TRIZ performs impressively in supporting decision-making in R&D projects and industrial development. R&D strategy and management planning include emerging science and technology; forecasting technology; managing innovation; planning product-oriented technology; studying the correlation between science, technology, and innovation; identifying potential opportunities; classifying patents; developing new products; solving problems; and evolving technology. In this study, the title, abstract, and keywords of the selected articles in Table 4 were analyzed with VOSviewer and are illustrated in Fig. 12 , which shows the frequency with which two terms occur simultaneously. Fig. 12 clearly portrays the interconnection of TRIZ with TM and NLP techniques in various R&D studies. Accordingly, the details of the literature review provided in this study will help engineers, designers, domain experts, and innovators take different measures to apply NLP techniques in defining TRIZ concepts semantically and to automate the analysis process. Although some scholars believe that TRIZ concepts are complicated to learn, the results of this study justify the application of ML and NLP, which can eliminate such barriers. Various TRIZ tools, and their continuous enhancement by engineers over the past years, have increased their significance in developing engineering systems and forecasting technologies. However, there is a lack of research on the development of various TRIZ tools through the integration of ML and NLP. This means that research has been limited to the general concepts of problem-solving definitions, inventive principles, level of invention, and contradiction matrices, which are not adequately developed. For instance, the TRIZ evolutionary trend needs to be improved and developed with respect to process automation with ML techniques and integration with various data resources [ 122 ]. The TRIZ evolutionary approach has the potential to track the development of a system from contradiction to contradiction and provide high-performance solutions by eliminating contradictions [ 123 ]. However, this study implies that most research conducted so far strongly relies on manual intervention by experts. This is further supported by Ref. [ 124 ], which indicated that TRIZ fundamentally relies on human cognitive mechanisms with little digital intervention. Furthermore, TRIZ is skewed towards empirical evidence, with a lack of emphasis on scientific theories, and it shows a lack of comprehensive development to meet the evolving requirements of TRIZ users. Regarding miscellaneous data resources, the results indicate that patents are the main resources in TRIZ projects. However, there are several limitations, such as the scarcity of studies that combine various datasets within patent fields or combine different types of datasets, such as web and social network data. Another limitation is the linguistic diversity of the patent databases. ML techniques are widely applied in pattern discovery to facilitate research and automate some aspects of studies. Most studies are limited to a specific case study, indicating that they cannot be generalized across other case studies. This was validated by Ref. [ 81 ], which illustrated that there is a dire need to increase research on TRIZ and neural networks to accommodate the collection of training data and to create synergy between neural networks and TRIZ.
This study presents the techniques and algorithms that have been applied in some areas but are yet to be applied comprehensively. For instance, the classification of patents based on TRIZ concepts requires more experiments in the case of supervised and unsupervised learning. Although these techniques enable scholars to analyze huge amounts of data through big data analysis, there is no solid framework in this area. Additionally, full-sentence studies are still limited: most are restricted to keyword-based or SAO-based analyses, which may lead to misinterpretation in certain cases. In some efforts, evaluation and interpretation were unique to the study and required expert knowledge: in some cases, TRIZ domain experts were fundamental, and in other instances, computer science experts were pivotal in reviewing the assessments. Automation of the evaluation process should be considered in future research. In addition to the scope discussed in this review paper, there are various ongoing related works that researchers have put forward at various conferences. Ni, Samet [ 125 ] proposed a patent ranking method to achieve inventive solutions from different domains by using LSTM and XLNet neural networks in the NLP field. In another study, inventive design method matching was introduced in combination with XLNet to construct links between problems and partial solutions [ 126 ]. In a further study, TRIZ reasoning was reproduced using deep learning techniques to compensate for the lack of scientific theories in the implementation of TRIZ, as articulated in Ref. [ 127 ]. To prioritize the initial problem in the early phase of inventive design, Hanifi, Chibane [ 128 ] applied the integration of failure mode and effect analysis (FMEA) into the IPG method. Guarino, Samet [ 86 ] presented a semi-supervised idea, a patent generative adversarial network, to combine multilevel classifiers (sentences and documents) and improve the performance of information extraction from patents. To facilitate the application of the TRIZ contradiction matrix, Berdyugina and Cavallucci [ 129 ] utilized an antonym identification technique to automatically extract potential contradictions within a patent. Additionally, a new approach was developed to present a contradiction matrix corresponding to the technical field in real time by applying NLP techniques to patents [ 130 ], and an automated method for extracting IDM-related information using NLP was discussed in Ref. [ 131 ]. To automate the technical feature extraction of the TRIZ contradiction matrix, Zhai, Li [ 118 ] suggested the Doc2Vec model to create the semantic space of patent text. The accuracy of their model was 87%, which reflects an improvement in comparison with the baseline model. Yu [ 132 ] adopted a hierarchically structured LSTM for TRIZ-based Chinese patent classification and compared the results with bidirectional encoder representations from transformers (BERT) and other ML algorithms. The results illustrate improvements in the area-under-curve score for the "innovation in product design" classification task, as opposed to other models.
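To illustrate the kind of semantic space referred to above, the following sketch trains a small gensim Doc2Vec model on toy patent-like abstracts and retrieves the most similar document for a new query; the corpus, vector size, and training epochs are placeholder choices, not those of Ref. [ 118 ].

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Invented patent-like abstracts tagged with document ids.
abstracts = [
    "a hinge mechanism that reduces friction in folding displays",
    "battery cooling plate improving thermal management of packs",
    "friction damping element for rotating shafts in turbines",
    "thermal interface material for power electronics modules",
]
corpus = [TaggedDocument(words=a.split(), tags=[i]) for i, a in enumerate(abstracts)]

model = Doc2Vec(vector_size=50, min_count=1, epochs=60, seed=1)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

query = "damping friction in a rotating turbine shaft".split()
vec = model.infer_vector(query)
print(model.dv.most_similar([vec], topn=1))   # nearest abstract in the semantic space
```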
Conclusion In the culmination of our study, a thorough and comprehensive systematic literature review on S-TRIZ analytics has unfolded, highlighting the imperative for in-depth exploration within the realms of TRIZ domains and pivotal concepts, including philosophy, methodology, and tools. This research underscores the critical intersection of insights from both TRIZ experts and the realm of data analytics. With a clarion call for advancement, we advocate for the refinement of existing models and methodologies. This pursuit aims not only to foster practical development, innovation, and production but also to empower engineers seamlessly integrating computer-aided techniques with the rich tapestry of TRIZ principles. Additionally, we engage in an extensive exploration of the limitations and challenges inherent in S-TRIZ development. While TRIZ serves as a valuable guide for accessing creative solutions, its efficacy is contingent on process automation for user-friendly applications. Notably, 62% of studies centre on existing TRIZ tools, underscoring the necessity to not only refine existing tools but also prioritize the development of essential tools, such as TESE. The diversity of databases, ranging from patent resources like USPTO to academic research and online information, highlights the critical need for their integration and analysis with AI. Although studies indicate a preference for syntactic and keyword-based analyses over sentence-based SAO analyses, advancements in NLP and AI, exemplified by BERT, signal a transformative shift. The selection of ML and AI techniques remains a nuanced challenge, emphasizing the need for careful consideration in specific tasks. Lastly, the most intriguing facet lies in Interpretation and Evaluation, where visualization techniques, including graph-based diagrams, and verification assessments, such as accuracy and precision, are widely applied. Finally, S-TRIZ, as an integration of computer-aided techniques conforming with TRIZ concepts, demonstrates applicability in conceptualizing innovation across interdisciplinary fields such as auto-remanufacturing, sustainability, recycling, cost-effective production, and robotics.
The study unfolds with an acknowledgment of the extensive exploration of TRIZ components, spanning a solid philosophy, quantitative and inductive methods, and practical tools, over the years. While the adoption of Semantic TRIZ (S-TRIZ) in high-tech industries for system development, innovation, and production has increased, the application of AI technologies to specific TRIZ components remains unexplored. This systematic literature review is conducted to delve into the detailed integration of AI with TRIZ, particularly S-TRIZ. The results elucidate the current state of AI applications within TRIZ, identifying focal TRIZ components and areas requiring further study. Additionally, the study highlights the trending AI technologies in this context. This exploration serves as a foundational resource for researchers, developers, and inventors, providing valuable insights into the integration of AI technologies with TRIZ concepts. The study not only paves the way for the development and automation of S-TRIZ but also outlines limitations for future research, guiding the trajectory of advancements in this interdisciplinary field.
Data extraction and analysis Before delving into the extracted data, we present the acronyms used in S-TRIZ databases, as encountered during the reading of this paper, in Table 3 . The 57 journal articles cited in Table 4 were chosen to define the proposed research questions and analyze them in detail. The main information extracted from the selected publications was summarized to illustrate the bridge between TRIZ tools and TM techniques. The retrieved information offers a varied scope for both TRIZ tools and TM. Synthesis of practical applications In the pursuit of synthesizing the extensive body of knowledge encapsulated in 57 articles at the intersection of TRIZ (Theory of Inventive Problem Solving) and AI (artificial intelligence), a thematic practical analysis has been conducted to distill key insights and trends. This analysis, as presented in Table 5 of the associated paper, unveils five overarching themes that underscore the integration of TRIZ and AI, each encapsulating a unique facet of the amalgamation. From the development of automated technology intelligence systems and TRIZ trend identification to patent classification, knowledge extraction, and ontological approaches, these themes showcase the practical applications and outcomes arising from the synergy between TRIZ principles and advanced AI methodologies. This comprehensive thematic analysis not only serves as a compass for navigating the multifaceted landscape of TRIZ and AI integration but also provides a nuanced understanding of the practical implications witnessed across diverse realms of technology analysis and innovation. TRIZ tools TRIZ masters have defined a set of versatile tools over decades of systematic innovation development [ 4 , 5 , 84 ]. Most researchers believe that using TRIZ tools manually is time-consuming, tedious, and in many cases unintelligible, because users are faced with a large amount of textual information, often in the form of patents [ 22 , 39 , 85 ]. Consequently, to increase precision and facilitate the utilization of TRIZ tools, integration with artificial intelligence (AI) techniques is essential, as shown in Table 4 . The application of AI in TRIZ is developed at three levels (philosophy, methodology, and tools), as depicted in the TRIZ pyramid in Fig. 1 . The classification of the selected articles based on TRIZ levels showed that 62% of publications discussed TRIZ tools. The most popular tools, as illustrated in Fig. 6 , include function analysis, evolutionary trends, and component analysis, with 19%, 13%, and 7% of the total research, respectively. The fundamental philosophies that form TRIZ concepts include the seven main pillars of TRIZ; detailed explanations can be found in Ref. [ 7 ]. The fundamental pillars (ideality, resources, function value, contradiction, space-time-domain interface, system transfer, and system transition) facilitate ideation [ 7 ]. Recently, AI has accelerated the ideation of innovative design; in our study, 18% of the articles fall under this category. It is worth noting that TRIZ tools and methodologies are based on these fundamental thinking philosophies [ 7 ]. The TRIZ methodology seeks identical problem–solution pairs, so that the solution of a specific problem may be applied in different technical fields [ 86 ]. Cavallucci and Strasbourg [ 87 ] developed an inventive design method (IDM) to expand the TRIZ body of knowledge. IDM aims to identify initial problems and partial solutions, whose links to possible causes and effects are shown in the representation of a problem graph [ 88 ].
As a pioneer in the utilization of AI in TRIZ methodology, Cavallucci, Rousselot [ 89 ] proposed a framework to extract patent knowledge and combine it with expert knowledge to construct an inventive design ontology. Recently, ML techniques have been studied to assist in the development of IDM for A53 and A56. In A57, Berdyugina and Cavallucci [ 83 ] took one step ahead in automating the extraction of the key components of the IDM by applying NLP techniques and affinity propagation as ML algorithms. TRIZ tools were developed to determine technical conflicts, innovation principles, and function analysis, and to recognize the evolution of systems. For instance, to design Smart Neck Helmets, 39 general engineering parameters were used to identify the design conflict, after determining the contradictions and finally selecting the proper innovation method in the innovation principles [ 90 ]. The computerization of TRIZ tools has been an attractive area of research, and this is portrayed in Fig. 6 , as 62% of the articles have been mentioned in this category. Data sources Data sources are critical in the use of TRIZ. As mentioned in the introduction, TRIZ itself was first formed by analyzing patent files. Nevertheless, applying TRIZ tools for any reason depends on the technical data. Traditionally, TRIZ practitioners have faced various difficulties searching for pertinent patents and extracting technical information for further analysis. This is why researchers have taken action to facilitate the process of patent analysis by employing the latest computer science technologies and making it as automatic as possible. In our investigation, we focused on two elements (document and database types). In terms of the document type, we identified five different document types in data analytics for S-TRIZ, as illustrated in Fig. 7 : (1) patent; (2) science, technology, and innovation (ST&I); (3) web-based; (4) lexical information; and (5) other types of documents, such as newsletters, industry publications, international patent office websites, and manufacturers' portfolios. Patent documents are the most common type of data used in S-TRIZ activities. Patent databases vary and almost every country has a specific local patent database [ 91 ]. USPTO and DII are two popular databases utilized in most S-TRIZ articles. Table 3 presents the list of acronyms used in Fig. 7 , which are related to the databases.
Patent documents include structured and unstructured textual data, and S-TRIZ has been applied to significantly automate the classification process based on the International Patent Classification (IPC) and the Cooperative Patent Classification (CPC) [ 92 ]. TRIZ metrics within a patent, such as the degree of ideality, the level of invention (LoI), S-curve stages, trends of evolution, the 40 inventive principles [ 93 ], and the contradiction matrix [ 94 ], have been considered meticulously by experts. ST&I databases are progressively being considered in seeking newly emerging science & technology (NEST) innovation aspects for decision makers in R&D projects [ 47 , 95 ]. Therefore, quantitative approaches in line with text-mining techniques converge to retrieve functional information from ST&I documents using a tech-mining approach [ 47 , 96 , 97 ]. Moreover, the semantic TRIZ methodology, in terms of technology forecasting and a system's evolutionary trend, has been part of the research within ST&I information [ 47 , 98 ]. Web-based resources provide rich information about design systems, demand, and other technical knowledge, such as Wikipedia [ 69 ], Internet technology trading platforms [ 70 ], and biological data [ 72 ]. This information can be retrieved using crawlers and scrapers for further analysis. Lexical information such as nouns, pronouns, verbs, and adjectives is gathered based on its definition in a popular dataset called WordNet [ 99 ]. WordNet has been used to generate ideas through morphological analyses [ 54 ]. Beyond the aforementioned databases, varied technical information resources provide a system's design details in the form of portfolios or industry publications. Portfolios are collections of information about a system's design, which provide clear concepts for innovation. Industrial publications provide a broad spectrum of tech-centric outlooks in the form of magazines, websites, newspapers, etc. Pre-processing and feature representation NLP and machine learning (ML) are two dominant subcategories of artificial intelligence (AI) that are widely utilized in S-TRIZ [ 69 , 100 ]. NLP is a confluence of AI and linguistics that intelligently facilitates text analytics [ 34 ]. ML is a set of algorithms that enables the statistical solving and analysis of NLP problems by converting unstructured text into a structured format [ 34 ]. Therefore, the application of ML and NLP in the context of TRIZ is to automate the process of understanding language related to the components of engineering systems in textual documents, for problem solving and product innovation. Pre-processing of text documents, as an initial step in NLP, commonly involves converting text into a format that is measurable, quantifiable, and computable [ 45 ]. The most typical preprocessing techniques applied in S-TRIZ are segmentation or tokenization, removal of stop words, stemming, and lemmatization [ 30 , 40 , 71 ]. Software packages that facilitate the abovementioned techniques include Python NLTK, VantagePoint, VOSviewer, and WordNet for mapping, among others. Choosing a proper preprocessing technique is highly dependent on how noisy a document is and what the expected outcome is; therefore, the use of this software differs between projects. In most studies, preprocessing and morphological analysis are used interchangeably [ 54 , 97 ].
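For illustration, a small NLTK-based preprocessing sketch of the kind described above (tokenization, stop-word removal, stemming, and lemmatization) is shown below; the sample sentence is invented, and real S-TRIZ pipelines would apply the same steps to patent titles, abstracts, or claims.

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.tokenize import word_tokenize

# One-time downloads of the required NLTK resources.
for pkg in ("punkt", "punkt_tab", "stopwords", "wordnet"):
    nltk.download(pkg, quiet=True)

text = ("The proposed valve assembly reduces vibration while "
        "increasing the sealing performance of the pump.")

tokens = [t.lower() for t in word_tokenize(text) if t.isalpha()]
stop_set = set(stopwords.words("english"))
filtered = [t for t in tokens if t not in stop_set]        # stop-word removal

stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()
print("stems:  ", [stemmer.stem(t) for t in filtered])
print("lemmas: ", [lemmatizer.lemmatize(t) for t in filtered])
```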
To analyze text documents, natural language processing techniques have been proficiently applied to extract technical features. Two linguistic techniques, syntactic (syntax) analysis and semantic analysis, have assisted machine translation and information retrieval [ 97 ]. Syntactic analysis refers to the grammatical linguistic rules that lead to the well-known subject–action–object (SAO) structure in S-TRIZ [ 65 ]. Part-of-speech (POS) tagging techniques are primarily used for syntactic analysis. Semantic analysis contributes to representing the logical meaning of words and sentences for computers in a manner that a human understands. Fig. 8 shows that 68% of the chosen articles in Table 4 attempted semantic analysis, as opposed to syntactic analysis; it also shows that 25% of the chosen articles applied both syntactic and semantic analyses in their studies. Feature selection in S-TRIZ has applications similar to those in other fields such as text mining and image processing. This process improves the selection of the feature or term subset with the highest discriminative rate and the lowest dimensionality [ 101 ]. Feature selection methods are primarily used in text classification to improve accuracy, reduce dimensionality, and alleviate irrelevant data [ 101 ]. The diversity and importance of feature selection methods, including strategies, approaches, types of targets, and labelled-data dependency, have been reviewed in detail in Ref. [ 101 ]. However, two common types of feature selection, namely SAO-structure and keyword-based selection, were identified within the selected articles. Fig. 9 shows that 64% of them were keyword-based, and 32% of the articles attempted to extract SAO structures in their studies. Mann [ 35 ] proposed a keyword-based analysis to assess the current value of patents by identifying strength factors, and SAO analysis to estimate future value by investigating function words. There are also several different techniques for selecting either keywords or SAOs, depending on how informative they are. Word embedding is a way to represent text as a numerical vector for unique word selection [ 19 , 101 ]. In the vector space model (VSM), of which the Word2Vec model is an example, a document is represented as an array of numbers (a vector), and the similarity between vectors is calculated by a cosine similarity score [ 48 , 49 ]. The most elementary technique for text vectorization is the bag of words (BoW). This model creates a vocabulary from all distinct words in the corpus and then marks their occurrence in a table of 0s and 1s for each sentence [ 102 ]. However, BoW has limitations, such as the size of the vocabulary, the complexity of computing sparse representations, and neglecting the meaning of words [ 103 ]. Therefore, Word2Vec provides two models: (1) continuous bag of words (CBOW), which predicts the current target word from the source context words, and (2) skip-gram, an unsupervised model that predicts the most related words for a current word [ 19 , 73 ]. Another common text vectorization technique is term frequency–inverse document frequency (TF-IDF), which statistically measures the relevance of a word to a document [ 52 ]. In fact, two different metrics are multiplied to obtain the weight of a word in a document [ 31 , 67 ]. The first metric is "term frequency," which reflects the importance of a word within a document by counting how often the word occurs.
The other metric is “inverse document frequency,” which measures how common or uncommon a word is across the collection of documents. It is computed with a logarithmic formula that, when normalized, results in a rate between 0 and 1 [ 31 ]. Results near 0 indicate a common word; conversely, results close to 1 imply an uncommon word [ 57 , 58 , 73 ]. VSM models possess limitations for the inspection of documents owing to dimensionality and sparsity, whereby numerous features take zero values [ 97 ]. One measure to address these limitations is the application of principal component analysis (PCA), which allows dimensionality reduction by projecting high-dimensional, sparse vectors onto a much smaller number of components [ 60 , 61 ]. Feature selection based on the SAO structure is a type of document representation in which features are selected syntactically, followed by the indication of subject (noun), action (verb), and object (noun) [ 44 ]. Indeed, the SAO structure refers to the TRIZ technical concept and can provide more information than keyword analysis [ 42 , 44 ]. Keyword-based analysis typically focuses on system components and neglects the verbs of phrases that express the function of the system; consequently, the relationships between components remain unexamined [ 62 ]. In contrast, the SAO structure enables scholars to seek core technological aspects creatively [ 62 ]. The application of the SAO structure in the selected articles is presented in Table 6 . It shows how a technical phrase is interpreted in a text document, what type of information can be extracted, and the type of knowledge obtained after analysis. Generally, the subject and object in a sentence refer to the components or subcomponents of a system, and the action refers to the function and relationship between components. Although SAO can be applied in various fields of technology and provides fruitful information, further work is needed to extract more solid technical knowledge. For instance, SAO only focuses on three elements of a sentence, while the rest of the sentence may reveal more details about a system, such as purpose, effect, and field, which are often not captured efficiently [ 71 ]. SAOs are also unable to identify which components are important in a system [ 19 ] or which components belong to the supersystem and which to the main system. Additionally, the “term clumping” method, which takes advantage of NLP techniques, has been utilized to clean and cluster large collections of technical text documents such as patents in order to obtain information and knowledge in a specific technical domain [ 61 , 62 , 71 ]. It integrates numerous NLP techniques such as removing stop words and constructing synonym lists, fuzzy set matching, TF-IDF, and PCA [ 47 , 52 , 56 , 57 , 60 , 61 ]. Data mining and pattern discovery with ML algorithms The end goal of S-TRIZ is to automate the manual processes of analyzing, simplifying, and visualizing various TRIZ methods to demonstrate the characteristics of a system in depth. The different types of text analysis procedures in Fig. 10 illustrate the diversity of studies that were identified during the SLR. Whether information management should be based on TRIZ or not is still debated among researchers, as it is highly dependent on the subject matter experts involved.
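To make the keyword-based vectorization route concrete, the minimal Python sketch below builds TF-IDF vectors for three invented patent-style sentences, compares them with cosine similarity, and applies truncated SVD (a PCA-style projection that works directly on sparse matrices) to reduce dimensionality. The corpus, the library (scikit-learn) and the number of components are illustrative assumptions rather than settings taken from the reviewed studies.

```python
# TF-IDF weighting, cosine similarity, and SVD-based dimensionality reduction
# with scikit-learn. The three "documents" are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.decomposition import TruncatedSVD

docs = [
    "A cooling fan transfers heat away from the battery pack.",
    "The heat exchanger removes thermal energy from the battery module.",
    "A touchscreen controller senses finger position on the display.",
]

# Sparse document-term matrix weighted by TF-IDF.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)

# Pairwise cosine similarity between the document vectors:
# the two battery-cooling documents should score higher with each other.
print(cosine_similarity(tfidf).round(2))

# PCA-style reduction of the sparse, high-dimensional vectors
# (TruncatedSVD works directly on sparse matrices, unlike plain PCA).
svd = TruncatedSVD(n_components=2, random_state=0)
reduced = svd.fit_transform(tfidf)
print(reduced.round(2))
```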
On this question, Verhaegen [ 104 ], for instance, believed that, notwithstanding TRIZ being categorized as design-by-analogy, novice practitioners face difficulties in interpreting information by analogy. Therefore, a method for automating the identification process is required. Product aspects for design-by-analogy, without considering TRIZ methods, have been proposed [ 104 ]. However, the aforementioned study introduced a general definition for problem-solving concepts in TRIZ without considering the various tools and fundamental innovative definitions within TRIZ theory. In fact, a number of studies focus on the automation of technical document analysis without consideration of TRIZ concepts, such as the identification of core technologies from patents related to fuel cell vehicles [ 105 ], an evaluation of the main factors for selecting keywords for patent analysis [ 106 ], the development of a topic modelling framework for ST&I analysis and prediction in the context of big data [ 17 ], clustering patents over time (an approach known as the patent lane) to identify similarity patterns among patents [ 107 ], a generative topographic mapping method applied with keyword vectors to identify promising technology opportunities [ 108 ], semantic patent analysis applied to detect emerging technologies in the field of camera technology management [ 109 ], the discovery of patents with novel innovation opportunities in the case of telehealth by using NLP techniques [ 110 ], a combination of two approaches, namely key-graph-based and index-based validation, to recognize promising technological innovation [ 111 ], clustering and identifying potential opportunities between scientific and technological fields, demonstrated in smart health monitoring [ 112 ], quantitative analysis using text mining to detect patent infringement automatically for Nintendo [ 113 ], a novel method to quantitatively assess the significance of function scores for a technology trend, demonstrated on genome sequencing [ 114 ], improvement of R&D project development in China's construction industry through cross-domain function and semantic trend analysis [ 85 ], and so forth. The major reasons why the aforementioned papers omitted the usage of TRIZ were claims that it is rigid, difficult to comprehend, limited in its scope of problem solving, and demanding of expert intervention [ 85 , 104 , 115 ]. Nevertheless, these claims are debatable, as the 57 papers chosen in Table 4 have successfully applied TRIZ fundamentals, and the principles of TRIZ may be modified to suit different engineering system requirements. In addition, TRIZ provides a vivid innovation roadmap and detailed problem-solving methods that can be utilized by both seasoned practitioners and beginners [ 79 , 116 , 117 , 118 ]. In this section, knowledge discovery from text (KDT) [ 31 ] algorithms and techniques, including data mining and pattern discovery with ML as applied in S-TRIZ, are reviewed across the 57 chosen papers. Table 7 categorizes KDT algorithms and techniques into deep learning, supervised learning, and unsupervised learning. Deep learning is a type of ML that leverages neural-network algorithms trained on large datasets. Artificial neural networks (ANNs) loosely simulate the human brain, with multiple layers of interconnected neurons. Deep learning enriches NLP tasks by learning patterns that extract and classify technical features. Supervised learning algorithms are applied to labelled datasets to classify the words extracted from textual documents.
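In many S-TRIZ pipelines, the items being labelled and later classified are SAO triples. The following rough sketch shows how such triples can be approximated from POS tags using a simplified noun-verb-noun heuristic (an illustrative assumption; the reviewed studies typically rely on dependency parsing and curated extraction rules), applied to an invented sentence.

```python
# Rough heuristic sketch: approximate SAO (subject-action-object) triples
# from POS tags with NLTK. This simplified noun-verb-noun heuristic is for
# illustration only and is not the extraction method of any cited study.
import nltk

for resource in ("punkt", "averaged_perceptron_tagger"):
    nltk.download(resource, quiet=True)

def extract_sao(sentence: str):
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    subject = action = obj = None
    for word, tag in tagged:
        if tag.startswith("VB") and action is None and subject is not None:
            action = word                # first verb after a noun
        elif tag.startswith("NN"):
            if action is None:
                subject = word           # last noun seen before the verb
            elif obj is None:
                obj = word               # first noun after the verb
    return subject, action, obj

# Invented patent-style sentence for illustration.
print(extract_sao("The piston compresses the gas inside the cylinder."))
# Expected rough output: ('piston', 'compresses', 'gas')
```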
In supervised settings, texts are labelled using tags or annotations for further classification. For instance, we can determine the subject, verb, and object across whole sentences using POS tagging to extract SAOs, as illustrated in the sketch above. Subsequently, similar SAOs are classified by training on their lexical tags using supervised algorithms. Supervised learning takes the form of either classification, which assigns test data to predefined categories, or regression, which models the relationship between dependent (response) and independent (predictor) variables. The classification of patent documents based on IPC, metadata, and bibliographic information is a challenging area for data scientists [ 92 ]. Unsupervised learning is applied to datasets that are not assigned labels or classes. Clustering algorithms are used on unlabelled texts or documents to group them into similar sets depending on their relevance. Table 8 categorizes the KDT algorithms for word embedding, collaborative filtering, dimensionality scaling, network modelling, and topic modelling. Word embedding is the most critical procedure in KDT because of the importance of translating human language into machine language. The outcome of word embedding can be used as an input for the ML algorithms listed above. Collaborative filtering encompasses recommendation techniques, such as co-occurrence, which is applied in most studies. Co-occurrence measures how frequently two given words appear together in textual documents. The distribution of words across documents determines the dimensionality of the representation. Unnecessary words may lead to noise during the analysis process, particularly in high-dimensional textual datasets. Dimensionality reduction is a common technique used to increase the quality of statistical analysis. Applying the above-mentioned algorithms and techniques is difficult when dimensionality must be reduced without a negative impact on the end results. Moreover, multidimensional scaling (MDS) is a statistical model that aims to reduce the complexity of high-dimensional datasets from the perspective of similarity measurements. MDS is beneficial for discovering technologies or components similar to those addressed by the experimental TRIZ tools. Network modelling, or text graphs, are visual representations of the synergy or relationship between the extracted keywords. A graph is constructed of nodes, which are terms, and edges, which represent the relationships between nodes. Visualizing the information within textual documents, whether keywords or SAOs, is a growing approach to discovering new knowledge. Topic modelling is a well-known unsupervised ML algorithm that tries to discover abstract “topics” by clustering words automatically. In linguistics, morphology refers to the grammatical construction of words and sentences. WordNet is a widely used lexical dictionary. Exploiting linguistic techniques such as semantic relationships (meronym/holonym or hypernym/hyponym) is an approach that researchers have used to automatically construct technical morphologies. The Apriori algorithm applies prior knowledge to identify the frequency of given words in a dataset for Boolean association rules. However, Apriori is not recommended because it demands high-capacity memory, and its performance is low and inefficient on large amounts of data. Fuzzy matching techniques identify the similarity between two words, strings, or text entries.
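A minimal sketch of such fuzzy matching, using only Python's standard-library difflib and two invented component names, is given below; the reviewed studies do not prescribe this particular implementation.

```python
# Fuzzy string matching with the standard-library difflib: a similarity
# ratio in [0, 1] between two component names (both names invented here).
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(similarity("heat exchanger", "heat-exchange unit"))   # high score
print(similarity("heat exchanger", "touchscreen sensor"))   # low score
```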
Fuzzy matching is effective, for example, in identifying the extent to which two engineering components or technologies are approximately similar. Evolutionary algorithms, such as genetic algorithms (GAs), are applied to optimization problems based on heuristic search. A genetic operation tree (GOT) was applied to construct an operation tree (OT) based on SAOs, which was then translated into a GA genotype as a self-evolutionary model for the automated generation of innovative technology [ 41 ]. However, evolutionary algorithms have yet to be developed for comprehensive text analysis, which reflects the need for more thorough research. Interpretation and evaluation of S-TRIZ In S-TRIZ-based research, the performance measurements and indicators used to evaluate algorithms are very diverse. The various evaluation approaches identified in this paper were classified into ten different categories, as illustrated in Fig. 11 . TRIZ metrics predefined by Altshuller assess technology maturity on an S-curve using indicators such as the profitability and cost reduction of products, the number of patents that utilize a specific technology, or the degree of novelty [ 31 , 32 ]. For further improvement of products, TRIZ evolution trends provide valuable criteria to evaluate potential technologies in patents [ 8 , 45 ]. TRIZ metrics are commonly visualized on a radar plot to depict the status of technologies before further analysis with experts. Text classification can be evaluated either with schemes that are technology driven [ 70 ], such as IPC and United States Patent Classification (UPC) codes, or with TRIZ-based schemes, which classify patents based on the contradiction matrix and the inventive principles [ 52 ]. There are two main challenges in evaluating text classification: (1) the lack of protocols and standards for collecting data, and (2) the inability to distinguish various performance measures across multiple experiments [ 102 ]. The common indicators for classification assessment are accuracy [ 58 ], recall, precision, and F-measure [ 57 ], which are computed from the confusion matrix [ 102 ]. The accuracy of a regression model is judged by how well the target value is predicted from the features, that is, by how low the prediction error is. The metrics of accuracy, precision, recall, and F-measure, commonly employed for classification assessment, can be computed as elucidated by Ref. [ 119 ]. The four abovementioned indicators can in some cases also be used in clustering assessments, as when Berdyugina and Cavallucci [ 83 ] computed statistical measures for contradiction identification versus human extraction. Indexes such as the correlation coefficient R² have been used to measure the level of invention [ 30 , 39 ]. Several other error measurements, including mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE) and relative absolute error, can be used in linear regression [ 120 ]. Statistical analysis can be used to assess indicator performance, namely by using the t-test and correlation analysis [ 48 ]. The former compares mean values as a way to distinguish datasets. On the other hand, correlation analysis measures the linkage between predicted and actual values. For instance, a t-test was used to assess the average novelty of design ideation in an experiment with students [ 48 ]. In another experiment, after experts annotated the data, Cohen's kappa coefficient was applied to measure inter-rater reliability as a statistical performance measure [ 65 ].
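The classification and agreement indicators above can be computed directly with standard libraries. The sketch below, using invented label vectors and scikit-learn as one common implementation, shows the calls for the confusion matrix, accuracy, precision, recall, F-measure, and Cohen's kappa.

```python
# Classification assessment: accuracy, precision, recall, F-measure
# (derived from the confusion matrix) and Cohen's kappa for inter-rater
# agreement. The label vectors below are invented for illustration.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix, cohen_kappa_score)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # e.g. "contains a contradiction" labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # hypothetical algorithm output

print(confusion_matrix(y_true, y_pred))
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))

# Chance-corrected agreement between two hypothetical human annotators.
rater_a = [1, 0, 1, 1, 0, 1, 0, 0]
rater_b = [1, 0, 1, 1, 1, 1, 0, 0]
print("kappa    :", cohen_kappa_score(rater_a, rater_b))
```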
The performance of clustering results for paper and patent corpora is evaluated by determining the cluster sparsity coefficients presented by ORCLUS [ 49 ]. Network-based evaluation represents quantitative relationships among TRIZ technical elements such as functions, technologies, products, etc. [ 69 ]. The network consists of elements, represented as structural nodes, and of links that connect them and represent the relationships between elements [ 62 ]. There are various types of network analysis, such as thesaurus networks [ 37 ], citation networks, function–behavior–state networks that refer to the components of a device [ 43 ], patent networks based on the semantic similarity among patents [ 44 ], innovation networks that represent the technological similarity between problems and solutions [ 61 ], and SAO networks that identify relationships between subjects (nouns), actions (verbs) and objects (nouns) to discover technical relationships between them [ 42 , 61 , 62 , 67 ]. The indicators for assessing these various types of networks are (1) centrality (closeness centrality), (2) density, (3) the cohesion index and (4) structural holes. Recently, Ref. [ 121 ] proposed the inverse problem graph (IPG) method, in which five types of problems are predefined from the initial analysis of the inventive design. This method was inspired by the inventive design method (IDM). The IDM framework is complementary to TRIZ knowledge and applies Pugh's theory or graph theory [ 87 ]. Technology Roadmapping (TRM) is a graphical and visual tool that shows industrial information such as materials, products, technologies, components, and so on over time [ 50 , 56 ]. TRM construction can be expert-based, computer-based or hybrid [ 50 ], known respectively as the qualitative, quantitative and hybrid (term/topic-based, P&S pattern-based, fuzzy-set-based) methods [ 47 , 56 ]. Additionally, TRM was extended by Wang [ 46 ] to visualize the recursive object model (ROM) and the function–behavior–state (FBS) diagram in two-dimensional maps. Patent mapping was presented by Ref. [ 70 ] to select promising topics concerning elements/fields and purposes/effects. A tree model based on TRIZ is another type of mapping that serves to construct concept designs [ 72 ]. Evaluation of TRM-based methods is conducted by experts, who define specific indicators according to the case study being developed. A web-based interface was designed to verify the feasibility of a TRIZ tool for function-oriented patent searching by conducting case studies [ 38 ]. A user interface was prototyped by Yoon [ 51 ] to assist system administrators in discovering function-based technology opportunities based on current technological capability. A graphical user interface was developed to indicate the applicability and validity of WordNet-based morphology for ideation [ 54 ]. For further R&D evaluation, technology domain experts should examine the validity of such interface systems by conducting case studies. There are some other evaluation techniques which do not belong to any of the above groups and require technical assessment. For instance, for TRIZ-based innovation evaluation, Yu [ 41 ] suggested that domain experts should first evaluate functionality, constructability, and cost effectiveness and then conduct an assessment of real-world application performance.
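As a small illustration of the network indicators listed above, the sketch below builds a tiny, invented keyword network with the networkx library and reports its density and closeness centrality; the terms and edges are placeholders rather than data from any cited study.

```python
# Network-based indicators on a tiny, invented keyword network:
# graph density and closeness centrality, two of the measures listed above.
import networkx as nx

G = nx.Graph()
# Edges stand in for co-occurrence or SAO relationships between terms.
G.add_edges_from([
    ("battery", "cooling"), ("cooling", "fan"),
    ("battery", "controller"), ("controller", "sensor"),
])

print("density:", round(nx.density(G), 2))
for node, score in nx.closeness_centrality(G).items():
    print(f"closeness({node}) = {score:.2f}")
```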
Elsewhere, to quantify the potential value of product opportunities, indicators such as the confidence of association rules and the importance of conditional/consequent products were presented based on a firm's internal capabilities for each product [ 55 ]. Novel evaluation indicators have also been suggested to measure technological feasibility, including (1) a magnitude index as a quantitative indicator, (2) an importance index as a quantitative indicator, and (3) a growth trend index as a qualitative indicator [ 60 ]. Kang [ 63 ] conducted an actual case study to evaluate market sales data and functional descriptions. In a different study, the ISO 9241-11 standard (effectiveness and efficiency) was used as a quantitative method to measure the performance of TRIZ-based inventive problem solving [ 68 ]. On the other hand, the feasibility of generating ideas through a morphological matrix based on unified structured inventive thinking (simplified TRIZ) is to be evaluated further with expert knowledge [ 73 ]. A graph-based clustering method known as spectral clustering, which applies the eigengap heuristic algorithm to calculate the optimal number of groups k, has been used to evaluate the accuracy of patent clustering based on SAO vectors [ 19 ]. Recently, candidate terms extracted from patents have been evaluated using unit-hood, which reflects the degree of strength or stability of syntactic combinations, and term-hood, which refers to how likely a word is to be a domain term, calculated as the C-value [ 74 ]. Finally, the quantitative outcome-driven innovation (ODI) method is capable of evaluating the importance of, and satisfaction with, a technology opportunity [ 22 ]. Additional information No additional information is available for this paper. CRediT authorship contribution statement Mostafa Ghane: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review & editing. Mei Choo Ang: Formal analysis, Funding acquisition, Methodology, Project administration, Supervision, Validation. Denis Cavallucci: Conceptualization, Methodology, Resources, Supervision, Validation, Visualization. Rabiah Abdul Kadir: Funding acquisition, Project administration, Supervision, Validation. Kok Weng Ng: Conceptualization, Methodology, Supervision, Validation. Shahryar Sorooshian: Funding acquisition, Methodology, Project administration, Resources, Validation, Visualization. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements This work was supported by the Ministry of Higher Education, Malaysia, through research grant FRGS/1/2018/TK03/UKM/02/6.
CC BY
no
2024-01-16 23:43:51
Heliyon. 2023 Dec 19; 10(1):e23775
oa_package/2c/91/PMC10788813.tar.gz
PMC10788817
0
INTRODUCTION The use of medicines is one of the mainstays of management in the care of people living with dementia (PLWD). 1 Numerous studies have been conducted to explore the perspectives of healthcare professionals (HCPs, e.g. pharmacists, general practitioners [GPs], nurses) about prescribing medicines for PLWD and the challenges this population faces when managing their medicines. 2 , 3 , 4 However, only a small number of studies have set out to explore the perspectives of PLWD and their carers about medication use and prescribing and how they can be supported in using medicines. A recent systematic review was conducted to identify the impact of interventions at hospital discharge to guide carers in medication management for PLWD. 5 The review identified only five studies and emphasised the need for well‐designed interventions to be developed to aid and guide carers with medication management for PLWD. 5 Cross et al., 6 in a qualitative study (involving carers, PLWD, GPs, nurses and pharmacists), reported that participants agreed that carers had an important role to play in medication management for PLWD, acting as advocates, facilitating communication between HCPs and PLWD, helping with decision‐making and providing increasing assistance with medication administration as dementia progressed. 6 All participants agreed on the importance of involving PLWD in medication management and decision‐making around medicines and that the role of carers was fundamental in medication management for PLWD. 6 One category of medicines frequently prescribed in PLWD and which has given rise to concern is those with anticholinergic activity. Indeed, several hundred medicines may exhibit anticholinergic effects, which include drowsiness, blurred vision, dry mouth, confusion and hallucinations. 7 However, a number of anticholinergic medicines are used to manage conditions such as urinary incontinence and prevention of blood clotting. 7 Other anticholinergics are often prescribed to manage the noncognitive symptoms of dementia, such as aggression, agitation, wandering and mood swings. In the case of the latter symptoms, benzodiazepines (e.g., diazepam), antipsychotics (e.g., risperidone) and antidepressants (e.g., amitriptyline) 7 may be used. However, there is growing evidence that their use may be associated with an increased risk of incident dementia. 8 , 9 A recently published observational study highlighted that higher anticholinergic burden (ACB—the cumulative effect of using multiple medications with anticholinergic properties concomitantly) was associated with significantly higher mortality rates in PLWD in comparison to PLWD who had no ACB. 10 A systematic review found no eligible studies that aimed to reduce ACB among PLWD in primary care. 11 This was an unexpected finding as it is recognised that interventions are needed to reduce ACB and the use of medicines with anticholinergic activity in this population without affecting the management of other conditions for which these medicines are prescribed. 12 There is very limited literature describing the experiences and perspectives of PLWD and their carers about the use of anticholinergic medications in PLWD; a greater appreciation of these perspectives may help researchers and clinicians to better understand PLWD and carers' concerns about the use of these medications and facilitate their more judicious use by clinicians and prescribers. 
Online discussion fora are increasingly being used by researchers as they represent a rich source of data pertaining to patient and carer experiences and can often provide additional perspectives that would not be accessible by more conventional qualitative methods. 13 Such fora have been utilised in research studies, including dementia and other long‐term conditions such as Parkinson's disease and stroke 14 , 15 , 16 and are increasingly accepted as a source of qualitative data. 17 , 18 , 19 The study outlined in this paper aimed to address this gap in the evidence base by analysing data from an online dementia discussion forum to explore the experiences and perspectives of PLWD and their carers about the use of anticholinergic medicines in this population.
METHODS Setting This study involved analysing archived discussions on Dementia Talking Point, a fully public online community for anyone affected by dementia, created and maintained by the Alzheimer's Society in the United Kingdom ( https://forum.alzheimers.org.uk/ ). It contains fora, areas where discussions take place on different topics; within these, members can create a ‘thread’ (a group of posts identified by a title containing an opening or original post that opens the dialogue of discussion). 14 The threads make it easier for users to find posts on a particular topic, such as people who are at a similar stage of dementia or in a similar situation. The threads can contain any number of posts, including multiple posts from the same members. 14 Only those who are registered as members of Talking Point can create new threads, edit posts and receive notifications of replies. However, threads, posts and archived discussions can be viewed by nonmember visitors to the forum. 14 According to the Talking Point website, there are currently 81,713 members, 130,483 threads and 1,903,747 posts. 20 Since all members are required to state their reason for joining when registering to use Talking Point, it was assumed that they all had some experience with, or connection to, a person living with dementia. Data selection The researcher (B. S.) searched the archived Talking Point threads and posts to extract data for analysis; he did not create posts or contact any members of the forum. Threads from the date of inception (2005) of the Talking Point forum to the search date (January 2022) were searched by using keywords (search terms) within the advanced search facility provided by the forum. The search terms were informed by the literature, particularly three observational studies conducted in the United Kingdom, 8 , 10 , 21 all of which reported commonly used anticholinergic medications amongst people with dementia. The search terms were discussed and agreed upon by the research team. The following terms, and combinations thereof, were used: ‘anticholinergic’, ‘antimuscarinic’, ‘antipsychotic”, ‘urological anticholinergic’, ‘oxybutynin’, ‘tolterodine’, ‘solifenacin’, ‘antidepressant’, ‘amitriptyline’, ‘diazepam’, ‘risperidone’, ‘paroxetine’, ‘dosulepin/dothiepin’, “quetiapine’, ‘isosorbide preparations’, ‘warfarin’. Examples of anticholinergic medicines and their main clinical indications are shown in Table 1 . Data extraction and screening All posts, including any of the search terms, were copied verbatim and transferred to Microsoft Word. The anonymity of forum members was assured by assigning a unique identifying code (e.g., TP001, where TP indicated ‘Talking Point’ and the number indicated the order in which posts were stored). Duplicate posts were removed. Posts were assessed for relevance to the study objectives by two researchers working independently (B. S. and H. E. B.). Irrelevant posts were removed, and reasons for exclusion were recorded. A third researcher (C. M. H.) was consulted when consensus could not be reached about the inclusion of a post. Data analysis Inductive thematic analysis was conducted by hand, using methods described by Braun and Clarke. 23 Posts were coded to meet the study aims by identifying members' experiences of using anticholinergic drugs, their perspectives about reducing the use of anticholinergic drugs and their understanding of the risks involved with the use of anticholinergic drugs in PLWD. Coding was undertaken by B. S. 
with independent coding of a subsample (20%) of posts undertaken by H. E. B. Posts were categorised as being one of the following: unique posts, similar posts (i.e., posts in which the same story was repeated but not entirely identical wording was used) or similar posts with additional codes (i.e., posts in which the same story was repeated, not entirely identical wording was used but also contained additional details which generated additional codes). Coding was discussed amongst the research team until agreement was reached on the coding frame. These codes were aggregated into broader themes and then discussed and agreed upon by the research team. Illustrative quotations were used to support interpretations. Quotes were edited when needed to improve readability; where text has been added or clarification provided, this has been placed within square parentheses []. Ethical approval was granted for this study by the Faculty of Medicine, Health and Life Sciences Research Ethics Committee, QUB on 4 January 2022 (Reference MHLS 21_160), and permission to use the forum data was granted by the Talking Point Manager. To enhance the reporting of this study, the COnsolidated criteria for REporting Qualitative studies checklist was used (COREQ) 24 (see File 1 ).
RESULTS Following the completion of the searches, a total of 1580 posts written by 625 forum users were extracted. Following the removal of duplicate posts, 587 posts were reviewed, and a total of 550 posts from 341 forum users were included for analysis. Among these, there were 541 unique posts and 46 similar posts. Of these similar posts, nine posts contained additional details, which generated additional codes. Therefore, these nine similar posts and the 541 unique posts were analysed (i.e., n = 550 in total). A flowchart of the process of data screening and selection is displayed in Figure 1 . An initial review of the posts indicated that they were provided exclusively by carers, and none could be identified as coming from PLWD. Therefore, the findings presented relate only to carers' perspectives. Thematic analysis of the carers' posts revealed their experiences of the use of anticholinergic medications in PLWD. The themes that encompassed the experiences were as follows: (1) motivators of prescribing, (2) perspectives on the process of prescribing and (3) the outcomes of prescribing. The dominant motivator of prescribing was the management of noncognitive symptoms, pre‐ and postdiagnosis of dementia. The process of prescribing was informed by the assessment of the risk‐benefit of starting a medication and shared decision‐making between the carer and HCP to a greater or lesser degree. The outcomes of prescribing were observing the effects of the medicines, which in turn influenced whether prescribing was reviewed and continued unchanged, continued but amended or reinitiated if the medicine had been previously stopped or discontinued (the process of deprescribing). Figure 2 summarises the broad themes that encompassed the experiences of forum users, which are explained in further detail below. Motivators of prescribing Carers reported that the presentation of noncognitive symptoms such as aggression, agitation, wandering, changes to sleep patterns and mood disturbances in PLWD appeared to be the main motivator for prescribing anticholinergic medications: Indeed, some forum users reported that anticholinergic medications were used by PLWD before they received a dementia diagnosis in response to the presentation of noncognitive symptoms while observing that there had been a decline in cognition since starting the anticholinergic medications: From the perspective of carers, the prescribing process after that was influenced by an assessment of the risk‐benefit of introducing medications, the need for trial and error and the approach taken to decision‐making between the PLWD, carer and prescriber. Perspectives on the process of prescribing Many information sources were reported to be used by forum users as they tried to understand the benefits and risks involved with using these medications if HCPs indicated that such medications might be required. Some were evidence‐based (e.g., clinical guidelines published by the National Institute for Health and Care Excellence 12 ), but many were not, such as health websites and news stories that were not supported by robust evidence: In addition, many users asked fellow forum users for help to understand their views and experiences and the risks and benefits regarding the use of anticholinergic medications: It was also recognised that trial and error would be part of the prescribing process. 
Forum users reported that based on the effect of these medications, prescribers often had to adjust the frequency and dosing of anticholinergic medications, which forum users described as a process of trial and error: The knowledge of risks and benefits on the part of carers contributed to their understanding of decisions made by prescribers: The extent of carer involvement in decision‐making was variable. As advocates, carers were keen to be a part of the decision‐making process since they knew the history of the PLWD and their medication history and could observe immediate and long‐term changes in a person's behaviour and symptoms. However, some carers described feeling excluded by HCPs from decision‐making about prescribing: However, some carers involved in decision‐making reported that they felt stressed about making the wrong decisions in case this resulted in PLWD experiencing negative effects from initiated medications: Outcomes of prescribing Following the initiation of prescribing of anticholinergic medicines, forum users described observing a variety of effects on PLWD: Many carers described the negative effects of medications through the impact on their own levels of anxiety and quality of life, from having to deal with challenging behaviours and noncognitive symptoms, such as aggression: Some of the forum users reported that newly introduced anticholinergic medications were ineffective in managing noncognitive symptoms. Forum users described feeling frustrated when this happened: Other forum users described the positive effects as ‘life‐changing’ for both carers and PLWD. Many forum users observed a major improvement in noncognitive symptoms after the introduction of an anticholinergic medication: Forum users acknowledged that anticholinergic medications would affect individuals differently and that this was important to consider when assessing a person's response to medication: The presentation of side effects led to a review of medication, which could result in a medication being continued, a change (increase or decrease) in dosing, a medication that had previously been stopped being reintroduced or deprescribing (withdrawal/stopping a medication): Other forum users noted the disappearance of negative effects after reducing the dose of an anticholinergic medication: Some users observed the reappearance of noncognitive symptoms when the dose of an anticholinergic medication was reduced, and in some cases, the medication was reintroduced: Deprescribing (withdrawal/stopping medications as described by forum users) was initiated by different prescribers (such as secondary care consultants and GPs) and sometimes carers when they reported a negative effect from anticholinergic medications: Some users also feared that following the deprescribing of anticholinergic medications, other comorbidities may become worse:
DISCUSSION To our knowledge, this is the first study that has analysed data from an online discussion forum to understand carers' experiences of the use of anticholinergic drugs in PLWD. A total of 550 posts were reviewed and analysed thematically. This study showed that anticholinergic medications were being prescribed before a person received a dementia diagnosis. Forum users described assessing the risks and benefits associated with individual medications and the extent of their involvement in decision‐making. Finally, the outcomes of prescribing were reported on the basis of the effects of the medications and decisions being made whether to continue, change or discontinue medications. In this study, the Talking Point forum proved to be a valuable source of data, 14 , 25 although it was evident that carers were exclusively represented in the posts extracted for this study. Rich accounts were collected from a large sample of ‘participants’ (341 unique forum users); the use of this method of data collection circumvented traditional approaches to sampling and recruitment normally followed in other qualitative study designs, which can be challenging in this population, and which would have been further complicated by the ongoing coronavirus disease 2019 pandemic at the time of this study. The data were collected in the absence of the researcher, which maintained the integrity of the data and anonymity of the participants, removed participant bias towards the research agenda and reduced the degree of intrusiveness. 13 , 26 However, due to the lack of interaction between the researcher and the participants (forum users), points that were unclear could not be clarified and the sociodemographic characteristics of study participants are unknown. In addition, forum users' posts were not always directed to the research question; therefore, the researcher spent a significant amount of time screening posts to ensure only relevant ones were included. 13 This study found that anticholinergic drugs were commonly used by PLWD before they had received a dementia diagnosis, which is consistent with other studies reporting that older people had a high prevalence of anticholinergic drug use. 8 , 27 , 28 , 29 Forum users described that noncognitive symptoms were frequently observed in PLWD, which probably accounts for the prescribing of these medications and which reflects other published research. 30 , 31 , 32 However, this contradicts clinical guidance and indicators of prescribing appropriateness, which recommend that anticholinergic medications should not be used in older people or those with dementia due to the increased risk of cognitive decline. 33 , 34 It was clear that Talking Point was considered a valuable source of information and support by forum users. However, there appeared to be no moderation of posts to evaluate the quality of the information provided by forum users or to remove irrelevant/incorrect information. Other studies have described the use of information sources, such as online fora, by patients and their carers when looking for answers to questions about their medical conditions. 35 , 36 However, this may suggest that carers are not being provided with sufficient information from HCPs regarding the availability of pharmacological and nonpharmacological management options and local services and support to which they could be signposted, 12 leading them to look elsewhere for advice.
Prescribing often happens after the prescriber has considered the potential risks and benefits of these medications for the recipient. A key part of this assessment is considering the evidence base. However, based on the content of the forum discussions, it was not clear if HCPs adhered to the evidence. For example, the NICE guideline, which covers the management of dementia, recommends minimising the use of medicines, such as antipsychotics and antidepressants, associated with increased ACB in those with suspected and confirmed dementia and recommends that prescribers use alternative medications. 12 Uptake and implementation of clinical guidelines is acknowledged to be challenging in clinical settings due to a lack of time, resources, and implementation support guidance, as well as difficulties in managing PLWD/carers'/family members' expectations. 37 , 38 , 39 Prescribing decisions also appeared to be influenced by input from carers, but the extent of this involvement was variable. Shared decision‐making is acknowledged to be a crucial aspect of person‐centred dementia care. 40 , 41 Increasingly, family members and carers may become more involved with and influence decision‐making, although this is acknowledged to be both difficult and stressful for surrogate decision‐makers. 40 , 42 , 43 These feelings of stress and burden associated with decision‐making were evident in forum users' posts, with many describing they felt pressure to make the right decisions or guilt over making the wrong ones. PLWD and carers' own knowledge and perceptions surrounding the use of (and risks associated with) anticholinergic medications also appeared to influence their ability to contribute to shared decision‐making, and it would be prudent for clinicians to ensure that PLWD and their carers/family members are adequately informed during the decision‐making process and involved from an early stage so that their views and opinions can be fully discussed and considered as part of advance care planning. 12 , 44 , 45 The outcomes from prescribing focused on the effects of the medications and further decision‐making on their prescribing. Forum users described a range of different effects on the PLWD following the use of anticholinergic medications, which varied from positive to negative. These accounts only represent these forum users' perspectives and may not be generalisable to the wider population of PLWD, as the effect of a medication may vary from person to person. While the negative effects of anticholinergic medications are acknowledged within the literature, there may be scenarios in which these medications are appropriately indicated and have a beneficial effect; such experiences were also described by forum users. This study also described many negative effects on carers, which were caused by the presence of noncognitive symptoms and the effect of anticholinergic medications on PLWD. This is consistent with many studies reporting the negative effects of caring on the carers of PLWD. 46 , 47 Teahan et al. 48 reported that family carers of PLWD experienced additional challenges due to the stigma associated with a person receiving a dementia diagnosis and dealing with noncognitive symptoms. 
48 To reduce the carer burden, NICE recommends providing psychoeducation and skills training interventions to carers of PLWD, which includes the provision of advice on how to look after their own physical and mental health, their emotional and spiritual wellbeing and training to help them provide care, such as how to understand and respond to changes in behaviour. 12 Strengthening the provision of carer support services could reduce the burden and stress among carers of PLWD. 49 The effects observed with the medication led to further consideration about whether to continue, change or discontinue the medication. Findings from a recent trial reported that implementing medication reviews in routine care could achieve long‐term benefits by increasing the continuity of care for this population. 50 During the review process, it is crucial to pay attention to the presence of potentially inappropriate medications for PLWD, such as anticholinergic medications. 33 , 34 , 51 When deprescribing of anticholinergic medications took place, this was often reported to be instigated by carers/PLWD reducing the dose or withdrawing the medication themselves. This may happen after experiencing an adverse event if the PLWD or carer is feeling confused or distressed or has concerns about the long‐term effects of a medication or if one believes the issue for which the medication was originally prescribed has been resolved completely. 20 , 52 , 53 However, it would be preferable and good practice for HCPs to oversee the deprescribing, withdrawal or dose reduction of anticholinergic medications in this population so that this can be done in a safe manner. 12 , 33 , 34 There is evidence that reducing the use of anticholinergic medications can reduce carer burden and reduce the frequency, severity and disruptiveness of moderate‐intensity noncognitive symptoms in PLWD. 54 , 55 Strengths and limitations This study utilised a novel method of data collection, and the findings have added to a limited evidence base on carer experiences and perspectives of the use of anticholinergic medications in PLWD. The searches were comprehensive and designed to identify as many relevant posts as possible by using search terms informed by the literature. 8 , 10 , 21 Despite this, it is possible that some potentially relevant posts may not have been identified due to spelling or typographical errors made by forum users, particularly with medication names. Due to limitations with the search facility available on the Talking Point website, advanced search strategies (e.g., truncation, Boolean operators) could not be used, which may have helped to make the search more focused. All posts appeared to be reported by carers, so the perspective of PLWD is absent. And the findings only represent the experiences of those carers who engaged with Talking Point and may not be generalisable to the wider carer population.
CONCLUSION This study has provided unique insights into carers' experiences and perspectives about the use of anticholinergic medications in PLWD. The findings have highlighted how commonly these medications are prescribed for PLWD and carers' concerns about their use. There is a clear need for the provision of information about these medications for carers and, indeed, PLWD. Further work is also needed to explore the views and experiences of relevant HCPs so that greater understanding can be sought of how they can contribute to reducing ACB in this population.
Abstract Introduction There is concern about the use of anticholinergic medications in people living with dementia (PLWD). Such medicines may increase cognitive decline and may be associated with higher mortality in PLWD who take these medicines. The aim of this study was to analyse data from an online dementia discussion forum to explore the experiences and perspectives of PLWD and carers about the use of anticholinergic medicines in this population. Methods Following receipt of ethical approval, archived discussions (posts) from Dementia Talking Point, a fully public online forum for anyone affected by dementia, created and maintained by the Alzheimer's Society, were searched from the date of inception to January 2022 using a range of search terms including commonly used anticholinergic medicines. Posts, including any of the search terms, were assessed for relevance and analysed using inductive thematic analysis. Results Five hundred and fifty unique posts were analysed, all of which had been provided by carers, with no posts attributed to PLWD. The themes that encompassed carers' experiences were (1) motivators of prescribing, (2) perspectives on the process of prescribing and (3) the outcomes of prescribing. The dominant motivator of prescribing was the management of noncognitive symptoms, pre‐ and postdiagnosis of dementia. Carers' perspectives on the process of prescribing were informed by an assessment of the risk‐benefit of starting a medication and shared decision‐making between the carer and healthcare professional to a greater or lesser degree. The outcomes of prescribing were observing the effects of the medicines, which in turn influenced whether prescribing was reviewed and continued unchanged, continued but amended, reinitiated if the medicine had been previously stopped or discontinued (the process of deprescribing). Conclusion This study has provided unique insights into carers' experiences and perspectives about the use of anticholinergic medications in PLWD, highlighting how commonly these medications are prescribed for PLWD and carers' concerns about their use. There is a clear need for carers and PLWD to receive information about these medicines and healthcare professionals to consider how to optimise the use of these medicines to avoid adverse effects. Patient or Public Contribution This work was informed by findings from previous research studies focusing on optimising medicine use for people with dementia in primary care, in which interviews were conducted with PLWD, their carers and primary healthcare professionals. Although not strictly patient and public involvement, we utilised the feedback provided by key stakeholders to inform the research questions and aim/objectives of this study. Shawaqfeh B , Hughes CM , McGuinness B , Barry HE . Carers' experiences and perspectives of the use of anticholinergic medications in people living with dementia: analysis of an online discussion forum . Health Expect . 2024 ; 27 : e13972 . 10.1111/hex.13972
AUTHOR CONTRIBUTIONS Bara'a Shawaqfeh : Conceptualisation; methodology; investigation; formal analysis; writing—original draft; writing—review and editing; visualisation; project administration; funding acquisition. Carmel Hughes : Conceptualisation; methodology; formal analysis; writing—original draft; writing—review and editing; visualisation; supervision. Bernadette McGuinness : Conceptualisation; writing—review and editing; supervision. Heather Barry : Conceptualisation; methodology; formal analysis; writing—review and editing; visualisation; supervision. CONFLICT OF INTEREST STATEMENT The authors declare no conflict of interest. ETHICS STATEMENT Ethical approval was granted for this study by the Faculty of Medicine, Health and Life Sciences Research Ethics Committee, QUB, on 4 January 2022 (Reference MHLS 21_160). Supporting information
ACKNOWLEDGEMENTS We thank the Dementia Talking Point Forum for granting access to the data. Bara'a Shawaqfeh is supported by a PhD Scholarship from Al‐Zaytoonah University of Jordan. DATA AVAILABILITY STATEMENT Research data are not shared. Access to the data is not available. Permission to use this data for this study was granted to us from the Talking Point discussion forum (established by the Alzheimer's Society), which is the custodian of the data. All enquiries regarding access to data should be directed to the Alzheimer's Society https://forum.alzheimers.org.uk/ .
CC BY
no
2024-01-16 23:43:51
Health Expect. 2024 Jan 15; 27(1):e13972
oa_package/c3/e4/PMC10788817.tar.gz
PMC10788818
38226128
Introduction Fahr syndrome is a rare neurodegenerative disorder (prevalence of < 1/1,000,000), characterized by calcium deposition in the basal ganglia and other regions of the brain, with associated cellular death [ 1 - 2 ]. Its etiology may derive either from phosphocalcium metabolism disorders, with hypoparathyroidism at the top of the differential diagnosis list, or from genetic abnormalities, in which case it is designated Fahr disease [ 3 - 4 ]. The treatment of this nosologic entity is mostly symptomatic, but the absence of a rapid diagnosis, given its progressive onset, leads to a deterioration in quality of life [ 5 ].
Discussion This clinical case presents the typical clinical and radiological features of Fahr disease, a rare disorder characterized by calcium deposition in the brain without disorders of phosphocalcium metabolism [ 1 - 2 ]. It classically presents around the age of 40-50 years [ 1 - 2 ], as in our case. The most common differential diagnoses are iatrogenic parathyroidectomy, radiotherapy, local infiltration, vascular events, and metastasis. Stroke may arise as one possible hypothesis given the similar clinical signs and symptoms; however, some chronicity is usually present from the onset, allowing for differentiation, as demonstrated in our case [ 3 - 4 ]. As a rare neurodegenerative disorder, little is written about it, and treatment of this nosologic entity is mostly symptomatic, with a progressive evolution leading to a degradation in the quality of life [ 5 ]. Typically, at this stage of onset, the syndrome is characterized by seizures and disturbing psychiatric manifestations, while dementia and movement disorders may take more time to develop [ 6 ], which differs somewhat from what was observed in our case. Although it is an incurable disease, new and improved strategies focused on symptomatic relief and prevention through genetic counseling are enhancing the quality of life for patients [ 7 ].
Conclusions In conclusion, this case serves as a classical presentation of Fahr's disease, with a specific emphasis on its distinctive imaging features. By presenting this typical scenario, we aim to familiarize healthcare professionals with the presentation of Fahr's disease, ensuring that its recognition in emergency departments becomes more commonplace. It is crucial to underscore that this condition, as evidenced in this case, is an incurable condition that significantly impacts patients' lives, leading to disability. This highlights the imperative for further studies focused on treatment strategies and the accumulation of evidence surrounding this rare disease. Increased research efforts are essential to better understand this disorder, ultimately paving the way for improved patient outcomes and management.
Fahr syndrome is a rare neurodegenerative disorder characterized by calcium deposition in the brain. It is usually associated with phosphocalcium metabolism disorders, such as hypoparathyroidism, or with a genetic predisposition, as seen in Fahr disease. Given the wide array of differential diagnoses, medical awareness should be emphasized to allow prompt diagnosis and management. In this case, we depict a classical presentation of Fahr syndrome, highlighting the differential diagnosis with stroke given the similar clinical signs and symptoms, while pointing out the distinct radiological presentation that raises clinical suspicion for this entity.
Case presentation A 53-year-old man presented to the emergency department (ED) with worsening dysarthria and an acute onset of diminished strength in his right upper limb. The patient reported a five-year history of chronic speech disturbances with a gradual onset, particularly affecting the articulation of certain words and sentences. The dysarthria had worsened on the day of the presentation, rendering him unable to speak properly and prompting him to seek care in the ED. Additionally, the patient mentioned reduced strength in the right upper limb, mainly in the palmar region, preventing him from holding small objects. This decrease in strength had an acute onset on the same day. On neurologic examination, moderate dysarthria was noted, with no discernible deficit in limb strength, which was rated five out of five. A cranioencephalic computed tomography scan was performed and showed multiple calcifications in the centrum semiovale, corona radiata, basal ganglia, thalamus, brainstem, and cerebellar hemispheres (Figures 1a , 1b ). The patient was hospitalized for etiological clarification; MRI excluded ischemic stroke, and no other risk factors were found besides dyslipidemia. The laboratory workup showed no alterations in phosphocalcium metabolism (Table 1 ). The patient started language therapy during hospitalization and was referred to internal medicine and neurology consultations for continued surveillance every six months.
CC BY
no
2024-01-16 23:43:51
Cureus.; 15(12):e50616
oa_package/ea/84/PMC10788818.tar.gz
PMC10788819
38226117
Introduction Oesophageal intraluminal pseudodiverticulosis is a rare benign condition of the oesophageal wall characterised by tiny flask-shaped outpouching lesions [ 1 , 2 ]. The most common presenting symptom is dysphagia that is often accompanied by oesophageal strictures. Chronic alcoholism, diabetes mellitus and gastro-oesophageal reflux disease have been reported as associated comorbidities. The condition was first described by Mendl et al. in 1960 [ 3 ]. The pathophysiology of the disease is still unclear. Bender and Haddad suggested that diverticula formation might result from dysmotility associated with oesophagitis [ 4 ]. The diagnosis requires endoscopy and/or radiological imaging tests such as a computed tomography scan or barium swallow.
Discussion Oesophageal intramural pseudodiverticulosis is mainly reported in men in the fifth and sixth decades of life [ 6 ]. Although the pathogenesis of the disease is not yet established, it is believed that chronic inflammation of the oesophagus due to gastro-oesophageal reflux disease may lead to obstruction of the excretory glands and fibrosis of the submucosa [ 7 ]. It can be associated with diabetes mellitus, oesophageal dysmotility, and chronic harmful alcohol use. Dysphagia is the most common presenting symptom in patients with oesophageal intraluminal pseudodiverticulosis. Only 20% of patients have the diagnosis confirmed at endoscopy. Barium swallow radiography is a more sensitive diagnostic method. Imaging studies such as computed tomography can show diffuse oesophageal thickening. The most common complication of this condition is oesophageal stricture, accounting for 80%-90% of the reported cases [ 8 ]. Treatment involves managing the underlying condition, such as gastro-oesophageal reflux disease, and alcohol withdrawal. Endoscopic dilation improves symptoms in patients with oesophageal strictures [ 9 ].
Conclusions The management of oesophageal intraluminal pseudodiverticulosis is dependent on the patient’s symptoms. Around 10% of patients do not require treatment. The use of proton pump inhibitors can relieve symptoms of reflux oesophagitis. Oesophageal stricture is a common complication of pseudodiverticulosis and dilation may be required. Interestingly, in this case, the patient presented with a history of food bolus impaction and we did not identify oesophageal stricture at endoscopy.
Oesophageal intraluminal pseudodiverticulosis is a rare benign condition of the oesophageal wall, with few cases reported in the literature. Patients usually present with dysphagia and food impaction in association with a proximal oesophageal stricture. The pathogenesis of the disease is not yet established; hence, it remains important to raise awareness of this distinctive pathology. Here, we present the case of a 62-year-old male admitted to Aberdeen Royal Infirmary, Scotland, UK, with food bolus impaction. Upper gastrointestinal endoscopy revealed food bolus impaction with underlying oesophageal pseudodiverticulosis in the distal two-thirds of the oesophagus.
Case presentation A 62-year-old male presented to the emergency department 14 hours after food bolus impaction of a piece of beef steak. His main symptom was complete dysphagia. He reported no significant past medical history besides high blood pressure and gastro-oesophageal reflux disease. He had a 20 pack-year smoking history and a prior history of alcohol consumption. He had presented twice in the past three years with food bolus obstruction requiring endoscopic removal. At presentation, he was haemodynamically stable, and physical examination was unremarkable. Initial laboratory investigations showed a normal haemoglobin level, eosinophil count, and C-reactive protein level. The patient was kept nil by mouth, and initial management with intravenous fluids and IV hyoscine butylbromide failed to resolve the impaction. The gastroenterology team was consulted, and upper gastrointestinal (GI) endoscopy was arranged on the same day. The initial departmental endoscopy under conscious sedation revealed food bolus impaction at 20 cm from the incisors. Different retrieval methods, including a foreign body net, rat-tooth forceps, and a snare, were unsuccessful. CT of the thorax with contrast excluded oesophageal perforation. Endoscopy under general anaesthesia was then arranged to allow more time for the procedure, and this resulted in successful clearance of the food bolus. An unusual pathology was identified below the area of impaction: tiny flask-like outpouching lesions were noted in the middle and lower oesophagus, with the appearance of pseudodiverticulosis (Figure 1) [5]. The patient was already on proton pump inhibitors and was discharged home the next day with no complications. A follow-up endoscopy is planned to review the area of bolus impaction and exclude early stricture formation.
CC BY
no
2024-01-16 23:43:51
Cureus.; 15(12):e50617
oa_package/4e/28/PMC10788819.tar.gz
PMC10788820
38226135
Introduction Hirschsprung disease (HD) and intestinal neuronal dysplasia type B (IND-B) are two neuromuscular gastrointestinal diseases that lie within the same clinical spectrum, presenting with severe constipation in childhood, with or without complications such as acute intestinal obstruction or enterocolitis [1]. The differentiation between these two diseases can only be established by histopathological analysis of rectal biopsies [2,3]. HD is defined by the absence of ganglion cells in the submucosal and myenteric plexuses of the enteric nervous system, whereas IND-B is characterized by hyperplasia of the submucosal nerve plexuses [1-3]. The treatment for HD is surgical colorectal pull-through [4]. In contrast, patients diagnosed with IND-B who have no complications should receive conservative treatment with laxatives [5]. Although the signs and symptoms that comprise the clinical presentation of these two diseases are well established in the literature, no studies have specifically compared the clinical characteristics presented in a case series of patients with HD and IND-B [6-9]. Knowledge of the specific clinical presentation of each disease, including demographic variables and signs and symptoms, can improve diagnostic suspicion and initial management. Therefore, we aimed to compare the clinical and demographic characteristics of patients with HD and IND-B at the time of histopathological diagnosis.
Materials and methods This single-center, retrospective, analytical, and comparative study was approved by the Research Ethics Committee of the Botucatu Medical School - São Paulo State University (UNESP) under protocol number 55763722.3.0000.5411. We included 119 patients aged 0-15 years diagnosed with HD or IND-B through histopathological analysis of rectal biopsies from 1998 to 2010. The histopathological diagnosis of HD was established based on the absence of ganglion cells in the submucosal and myenteric nervous plexuses of the distal rectum [10,11]. The histopathological diagnosis of IND-B was established according to the morphological criteria proposed by the Frankfurt Consensus (1990) [12]. The patients were stratified into two groups according to the results of the rectal biopsies: the HD group, including 69 patients with a diagnosis of HD, and the IND-B group, including 50 patients diagnosed with IND-B. Information was retrieved from the patients' medical records. The following data were retrieved and tabulated: 1) clinical and demographic information: sex, gestational age, birth weight, age at symptom onset, and age at diagnosis; 2) information on the clinical picture in the neonatal period: the presence of intestinal symptoms, delay in meconium elimination, and presence of associated malformations; 3) clinical information present at the time of histopathological diagnosis: symptoms related to bowel habits (defecation frequency, episodes of painful or strained defecation, evacuation bleeding, abdominal pain, presence of fecaloma, need for bowel washout, and fecal incontinence), episodes of enterocolitis and acute intestinal obstruction, the need for urgent surgery, and failure to thrive; and 4) results of diagnostic screening tests: anorectal manometry and barium enema. A comparative analysis between groups was performed. Numerical data are presented as mean values ± standard deviations or medians (interquartile deviation), according to whether the data were normally distributed, as assessed with the Kolmogorov-Smirnov test. Proportions are presented as percentages with their respective confidence intervals. Comparisons between the groups were performed using different statistical tests according to the type of variable analyzed. Nominal variables were analyzed using Fisher's exact test or the chi-square test with Yates correction. Differences in proportions were compared using a binomial test. Continuous numerical variables with nonparametric distributions were compared using the Mann-Whitney U test, and those with parametric distributions were compared using Student's t test. Relationships and differences were considered statistically significant at p < 0.05. Analysis was performed using SPSS v. 22.0 (IBM Corp, Armonk, NY, USA). The results were graphically summarized using a Venn diagram built with the Adobe Creative Cloud® tools.
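As a rough illustration of the comparative testing strategy described above, the following Python sketch applies the same families of tests with SciPy to invented data; this is not the authors' analysis, which was carried out in SPSS v. 22.0, and every count and value below is hypothetical.

```python
# Illustrative sketch only: the families of tests named above, applied to invented data.
from scipy import stats

# Hypothetical 2x2 contingency table (rows: HD, IND-B; columns: feature present, absent)
table = [[53, 16],
         [35, 15]]
odds_ratio, p_fisher = stats.fisher_exact(table)              # Fisher's exact test
chi2, p_yates, dof, expected = stats.chi2_contingency(table)  # chi-square; Yates correction applied by default for 2x2

# Hypothetical nonparametric continuous variable (e.g., age at diagnosis, days)
hd_age = [10, 20, 50, 75, 120, 300]
indb_age = [90, 200, 365, 400, 900, 1400]
u_stat, p_mw = stats.mannwhitneyu(hd_age, indb_age, alternative="two-sided")

# Hypothetical parametric continuous variable (e.g., maximum days without defecation)
t_stat, p_t = stats.ttest_ind([9, 8, 11, 10, 12], [10, 12, 11, 9, 13])  # Student's t test

for name, p in [("Fisher's exact", p_fisher), ("Chi-square (Yates)", p_yates),
                ("Mann-Whitney U", p_mw), ("Student's t", p_t)]:
    print(f"{name}: p = {p:.3f} ->", "significant" if p < 0.05 else "not significant")
```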
Results This study included 119 patients, of whom 88 (74.0%) were males and 31 (26.0%) were females. The median age at symptom onset was 1 (15.5) day, and 76.8% of patients had symptoms in the neonatal period. The median age at the time of histopathological diagnosis was 83 (1,083) days. Sixty-nine (58.0%) patients were diagnosed with HD, and 50 (42.0%) were diagnosed with IND-B, thus composing the two comparison groups (HD group and IND-B group). Comparison between the HD and IND-B groups Fifty-three (76.8%) patients with HD and 35 (70.0%) patients with IND-B were males. There was no significant difference in sex distribution (p = 0.532; chi-square test) or age at symptom onset (HD: 0 (6.5) days vs IND-B: 2.5 (90) days; p = 0.144; Mann-Whitney U test). However, symptom onset in the neonatal period was significantly more prevalent (p = 0.03; chi-square test) among patients with HD (86.3%) than among patients with IND-B (66.7%). The comparison between patients in both groups regarding clinical characteristics presented in the neonatal period is shown in Table 1. Delayed meconium elimination (p < 0.001) and the presence of intestinal symptoms (p = 0.001) were associated with the diagnosis of HD. Patients with IND-B were significantly older at diagnosis than those in the HD group (HD: 50 (275) days vs IND-B: 365 (1,417) days; p = 0.002; Mann-Whitney U test). At the time of diagnosis (Table 2), most patients with HD presented with failure to thrive (p = 0.02) and a history of previous episodes of enterocolitis (p = 0.049). In contrast, evacuation bleeding was more common in patients with IND-B (p = 0.007). There was no significant difference in the maximum number of days without defecation (HD: 9.62 ± 9.26 days vs IND-B: 10.87 ± 8.09 days; p = 0.508; t-test). Regarding the main clinical picture presented at diagnosis (Table 3), constipation was more common in patients with IND-B (p = 0.004), and acute abdominal obstruction was more common in patients with HD (p = 0.031). Because of this, the need for urgent surgery was significantly higher (p < 0.001; chi-square test) in the HD group (62.0% [48.1%-74.1%]) than in the IND-B group (22.9% [13.3%-36.5%]). Complementary diagnostic screening tests (Table 4) showed that the absence of the rectoanal inhibitory reflex on anorectal manometry was more common in HD patients (p = 0.028). The Venn diagram in Figure 1 summarizes the distribution of the main clinical and demographic aspects found when comparing patients with HD and IND-B.
Discussion Our study observed a male predominance in patients with HD and IND-B. The male predominance in HD is well defined, with ratios ranging from 3:1 to 4:1 [12]. This ratio is influenced by the extent of the aganglionic segment, ranging from 1:1 to 2:1 in long forms and 0.8:1 in total colonic aganglionosis [13,14]. The number of patients born prematurely or with low birth weight was limited and did not differ between the two diseases. HD and IND-B typically affect full-term infants with adequate birth weight [9,13,15]. The onset of symptoms in the neonatal period, including delayed meconium passage, was more common in patients with HD than in those with IND-B. Clinical manifestations of HD depend directly on the extent and degree of spasticity of the aganglionic segment [16]. In most cases, symptoms appear in the first few days of life, with changes in bowel habits, failure to thrive, flatulence, and vomiting. Up to 90% of cases present in the neonatal period and are characterized by intestinal obstruction [13,17]. Delay in meconium passage in the first 24 to 48 hours of life is reported in up to 90% of patients with HD [13,15]. A minority of patients do not have symptoms during the neonatal period and experience severe constipation throughout childhood [13]. Regarding IND-B, some studies have shown that most patients develop symptoms within the first year of life but not in the neonatal period or subsequent months [7,18]. Delayed meconium passage can also be observed; however, it is less common than in HD [7]. Severe conditions due to acute or chronic complications were more frequent in patients with HD than in those with IND-B. Patients with HD showed more failure to thrive, episodes of enterocolitis, and acute intestinal obstruction, and more often required urgent surgical approaches. Enterocolitis associated with HD is the most severe clinical complication related to the disease and can progress to dehydration, sepsis, and death [13,19]. Its incidence in HD ranges from 12% to 58% of patients, and it may occur before or after surgical treatment. This complication carries mortality rates of 1%-10%, most often in newborns before definitive surgery [19,20]. Clinical pictures compatible with a diagnosis of enterocolitis have been described in some patients with IND-B; however, they are less common than in those with HD [7,8]. HD accounts for 20%-25% of cases of intestinal obstruction in the neonatal period [21]. Acute obstructive symptoms can also occur in older children. Many of these patients do not respond to initial clinical treatment and require urgent surgical approaches with temporary colostomies [22]. Evolution to acute intestinal obstruction can also occur in patients with IND-B; however, it is less common than in HD. It is nonetheless the most frequently reported complication in patients with IND-B and is often the determining factor for surgical treatment [5]. In contrast, chronic constipation as the main symptom was more common in patients with IND-B than in those with HD. Constipation of varying severity is the most common clinical condition in patients with IND-B [23]. Most of these cases evolve insidiously, with little response to conventional treatments for constipation and without acute complications. This is consistent with our observation that the age at diagnosis was significantly higher in patients with IND-B than in those with HD.
Less severe symptoms, with chronic evolution and without acute complications, may lead to a delay in referral to specialized services and, consequently, to a delay in diagnosis. Publications in the last two decades have highlighted the increasing number of IND-B cases diagnosed in adults, some with symptoms of constipation since childhood [24-26]. In our study, among the chronic symptoms related to defecation, there was a significant difference in evacuation bleeding, which was more common in patients with IND-B than in those with HD. This type of bleeding is associated with the evacuation of bulky, hardened stools that injure the perianal mucosa and can be considered a sign of local severity. Among the complementary tests usually used for the initial diagnostic investigation of HD, we observed a significant difference in the absence of the rectoanal inhibitory reflex on anorectal manometry, which was more common in patients with HD than in those with IND-B. Anorectal manometry is considered an initial screening method in the investigation of HD [11], with a specificity of 94.2%, a sensitivity of 88.4%, and false-positive rates ranging from 0% to 62%. If the rectoanal inhibitory reflex is absent, the child should undergo rectal biopsy for diagnostic confirmation [11]. However, the use of anorectal manometry in the diagnostic workup for IND-B remains controversial, with variable results [7,23,27,28]. It should be noted that in our series, the reflex was absent in 62.5% of the patients with IND-B; during the diagnostic workup for HD, this finding prompted rectal biopsies, which ultimately established the diagnosis of IND-B. Identifying the transition zone between the spastic aganglionic segment and the dilated colon on barium enema is another method used in diagnostic screening for HD, with sensitivity and specificity rates of 73% and 90%, respectively [11]. Although 75% of neonates with HD present with a transition zone, the absence of this sign does not exclude the possibility of aganglionosis [29]. However, the use of these radiological findings in the initial diagnostic investigation of patients with IND-B remains controversial. Most patients with IND-B do not exhibit specific radiological features on barium enema; as in most patients with intestinal constipation, there is usually an increase in the caliber of the rectum and sigmoid colon. A minority of patients may present with conical colon dilation similar to that observed in HD [7,27]. In our study, there was no significant difference in the identification of the transition zone between patients with HD and those with IND-B. However, it should be noted that almost half of the patients with IND-B did not present with this finding on barium enema. Considering that barium enema is usually focused on diagnostic screening for HD and that identifying the transition zone is the criterion for performing a rectal biopsy, the absence of this finding could lead to delays or errors in diagnosing patients with IND-B. This study has limitations, such as its retrospective design based on clinical information obtained from medical records at a single center. Specific information from the clinical history and complementary tests was unavailable in some cases; therefore, the number of patients included in each comparative analysis varied. In addition, most barium enema radiographic images were not available for reanalysis, so this part of the study was performed using examination reports only.
However, this is the first study to specifically compare the clinical picture of patients with HD and those with IND-B. The number of patients included can also be considered a strength of this study, given that it dealt with two rare diseases.
Conclusions Although HD and IND-B are part of the same clinical spectrum and the differentiation between these two diseases depends on the histopathological analysis of rectal biopsies, deeper knowledge of the particular clinical presentation of each disease can help direct diagnostic suspicion and initial management. Our study identified two distinct clinical pictures, one for each disease, based on significant differences in the comparative analyses. In most cases, patients with HD experienced symptoms in the neonatal period, with delayed meconium passage. In addition, they had more severe conditions characterized by acute complications, such as enterocolitis and acute abdominal obstruction, and chronic complications, such as failure to thrive; therefore, they commonly required urgent surgery. In most cases, patients with IND-B were diagnosed late, with chronic and insidious intestinal constipation refractory to conventional treatment. The complications presented by these patients were limited and related to evacuation symptoms, such as evacuation bleeding.
Background: Although the signs and symptoms that comprise the clinical presentation of Hirschsprung disease (HD) and intestinal neuronal dysplasia type B (IND-B) are well established, no studies have specifically compared the clinical characteristics presented by patients with these diseases. We compared the clinical pictures of patients with HD and IND-B at the time of histopathological diagnosis. Methods: This was a single-center, retrospective, analytical, and comparative study. We included 119 patients aged 0-15 years diagnosed with HD or IND-B. Information from the medical records of these patients was retrieved to obtain demographic and clinical information at the time of diagnosis. The data were compared statistically according to the characteristics of the variables. Results: Sixty-nine patients (58.0%) were diagnosed with HD, and 50 (42.0%) had IND-B. The HD group had significantly more individuals with symptom onset in the neonatal period (p = 0.001), delayed meconium clearance (p < 0.001), failure to thrive (p = 0.02), and acute complications, such as enterocolitis (p = 0.049) or acute abdominal obstruction (p = 0.031), more commonly requiring emergency surgery (p < 0.001). Patients with IND-B were diagnosed at a significantly older age (p = 0.002). They more commonly had chronic constipation as their main symptom (p = 0.004), with local complications, such as evacuation bleeding (p = 0.007). Conclusion: There were significant differences between the clinical pictures of patients with HD and IND-B. Knowledge of each disease’s most common signs and symptoms can help direct diagnostic suspicion and initial management.
Data are available on reasonable request. The data are stored as de-identified participant data which are available on request to [email protected].
CC BY
no
2024-01-16 23:43:51
Cureus.; 15(12):e50618
oa_package/47/21/PMC10788820.tar.gz
PMC10788822
38109480
Introduction Direct air capture (DAC) of CO 2 from the atmosphere based on adsorption processes has garnered tremendous interest as a potentially scalable negative emissions technology. A large number of publications have reported the DAC performance of adsorbents at ambient conditions (i.e., temperatures >20 °C), but there has been only limited investigation of lower temperatures. 1 Compared to DAC under ambient conditions, DAC at colder temperatures may allow the usage of physisorbents with lower CO 2 heat of adsorption and hence enable DAC processes with lower energy consumption. In addition, the lower absolute humidity at colder temperatures could potentially reduce the energy consumed for water desorption, and it may be advantageous to perform DAC at low temperatures to reduce the oxidative degradation rate of PEI. 2 , 3 This research gap hampers the rapid development and deployment of adsorption-based DAC processes in many areas of the world where the annual average temperature is below the typical temperature of research laboratories (20–30 °C). Song et al. investigated the potential of using commercially available zeolite adsorbents for DAC under subambient conditions. 4 It was found that a predrying step before adsorption could be considered at cold temperatures, whereas this approach would be cost prohibitive at higher temperatures where water vapor content in the air can be much higher. Adsorbents with amine functionalities that are covalently grafted to or physically entrapped in the pores of support materials, such as silica, cellulose aerogel, or metal–organic frameworks (MOFs), have demonstrated encouraging DAC performance under both dry and humid conditions at ambient laboratory temperatures. However, their performance at subambient conditions has been underexplored. 5 − 11 Rim et al. studied the DAC performance of supported poly(ethylenimine) (PEI) and tetraethylenepentamine (TEPA) in MIL-101(Cr) under subambient conditions. 12 When the amine loading is moderate, the amine–CO 2 interactions have moderate enthalpies of adsorption, akin to weak chemical interactions, providing stable working capacities (up to 0.75 mmol g –1 ) with narrow temperature swing windows (e.g., – 20 to 25 °C). Similar to previous works focusing on ambient temperatures or above, 13 enhancement of the subambient DAC performance of amine-impregnated MIL-101(Cr) was also observed under humid conditions. A recent study showed that the high surface area to pore volume ratio of MIL-101(Cr) results in weak chemisorption (the formation of carbamic acid) of CO 2 in MIL-101(Cr)-supported TEPA, which requires less energy consumption for CO 2 desorption compared to the case of strong chemisorption of CO 2 (the formation of carbamate). 14 These results suggest the intriguing possibility of using amine-based adsorbents for DAC under cold conditions with lower energy consumption relative to hot and humid climates. Because of the low concentration of CO 2 in air, large quantities of air must be processed to capture significant amounts of CO 2 . For this reason, it is important to translate the adsorbents from the initially studied powder form into other geometries and structures to achieve low pressure drops along the sorption bed without significantly increasing mass transfer resistances or compromising the uptake capacities. Prior works have explored a variety of forms of adsorbents including pellets, 15 − 17 fibers, 18 flat sheets, 19 and monoliths for DAC. 
20 However, all known structured DAC contactor studies focus on ambient or warmer testing conditions. Recently, additive manufacturing, or 3D printing, has emerged as a nascent technology to fabricate adsorption contactors for a variety of separation applications. 21 Direct ink writing (DIW) is the most common 3D printing approach that has been used to prepare monolithic sorbent structures, where ink containing the adsorbent powder is continuously extruded out of a printer head (nozzle) and deposited using precise spatial coordinates predetermined by 3D printing programs. Compared with conventional shaping methods such as pelletizing, molding, and extrusion, 3D printing can potentially afford better spatial manufacturing resolutions to allow the design and manufacture of novel sorption contactors with complex engineered geometries. If carefully designed, such geometries can potentially enhance the mass and heat transfer performance of these contactors, as suggested by computational simulations. 22 To date, a wide variety of adsorbent particles including porous carbons, 23 , 24 zeolites, 25 − 27 MOFs, 28 − 31 and covalent–organic frameworks (COFs) 32 have been formulated into printable inks for 3D printing of monolithic structures for chemical separations. For example, Pereira et al. used 3D printing to fabricate a monolith containing zeolite 13X and carbon black particles. 25 Because of the short distance between carbon and zeolite 13X, zeolite 13X can be quickly regenerated by resistance heating of carbon particles when a voltage is applied to the contactor, which may allow for higher energy efficiency compared to typical sorbent regeneration methods based on heat exchange liquids or vapors (e.g., indirect steam stripping). The Rezaei group explored a series of ink formulations for 3D printing based on water, clay, poly (vinyl alcohol) (PVA), and porous adsorbents. 26 This formulation has been used to fabricate monoliths of zeolites and MOFs for not only gas sorption but also heterogeneous catalysis. 28 , 30 , 33 , 34 For example, Lawson et al. used this ink system to prepare MIL-101(Cr) monoliths and impregnate amines in these monoliths for CO 2 removal in an enclosed environment. 30 More recently, this ink formulation was adopted to fabricate monoliths containing two types of nanoparticles: Fe 3 O 4 and Ni-MOF-74. 35 Induction heating of the Fe 3 O 4 nanoparticles enabled the rapid heating and regeneration of the adsorbent monolith. In another example, Grande et al. formulated a nonaqueous ink containing UTSA-16, hydroxypropyl cellulose, boehmite AlO(OH), and isopropyl alcohol with suitable rheological properties to fabricate UTSA-16 monoliths for CO 2 capture. 31 In situ synchrotron XRD-CT data were collected to reveal insights into the spatial and temporal evolution of UTSA-16 in the monoliths during CO 2 sorption. Together, these examples showcase the versatility of these 3D printing methods to fabricate monoliths with complicated compositions. Despite the high sorbent loadings, some monoliths exhibited little interparticle porosity, which could be a source of the observed mass transfer resistances. 30 Solution-based additive manufacturing (SBAM), another method of DIW 3D printing, utilizes phase separation of polymer solutions to generate macropores in sorbent monoliths. 36 In this approach, viscous polymeric dopes composed of polymers, solvent, and nonsolvent are used as the inks for printing. 
Once the ink is extruded out of the nozzle and deposited on a substrate, the evaporation of volatile solvents leads to spinodal decomposition of the deposited polymeric filaments, which not only increases the storage modulus of the filaments for better preservation of filament shape but also affords interpenetrated polymer-lean and polymer-rich phases within the filaments. The polymer-lean phase can be subsequently removed by solvent exchange after printing to generate macropores that are beneficial for rapid mass transfer inside of the filaments. Zhang et al. showed that SBAM is applicable to printing a variety of polymers including cellulose acetate (CA), Matrimid, and polymers of intrinsic porosity PIM-1. 36 , 37 PIM-1 was fabricated into air contactors by SBAM with superior mass transfer efficiency and toluene uptake capacities compared to those of contactors using PIM-1 in the form of pellets and fibers. In addition to solvent evaporation, spinodal decomposition could also be triggered by the diffusion of nonsolvent vapor into the deposited polymer filaments. For example, Xu et al. controlled the internal porosity and layer adhesion of printed filaments composed of a mixture of 2-pyrrolidinone, poly(sulfone), poly(styrene)- block -poly(acrylic acid), and carbon nanotubes by carefully modulating the humidity level in the printing environment. 38 The 3D printed structured sorbents were subsequently modified by poly(ethylenimine) (PEI) and terpyridine for the efficient removal of metal ions from water under dynamic flow conditions. To date, DIW using an ink based on polymeric solutions has not been employed to fabricate structures that contain high loadings of adsorbents for chemical separations. The incorporation of solid particles will change the rheological properties of the ink, and solid particles may agglomerate to clog the printer nozzles if these particles are not well dispersed before printing. 39 In addition, because porous adsorbent particles typically comprise greater volume fractions in the ink than nonporous particles at the same weight loading, the changes in rheological properties (e.g., viscosity) of the ink brought on by sorbents will be much more significant compared to the changes caused by adding nonporous particles. Therefore, it is challenging to print structures with high weight loadings of adsorbents, which is important for minimizing any decline in separation performance due to the introduction of dead weight into the printed structures. In this work, we describe the utilization of SBAM to prepare sorption contactors for DAC and explore the sorption performance of the contactors under primarily subambient conditions. Cellulose acetate (CA) is used as the macroporous support polymer for these contactors, and microcrystals of zeolite 13X and MOF MIL-101(Cr) were distributed evenly within this porous polymer matrix. We show that the gravimetric loading of adsorbents can be as high as 70 wt % in these cases, and the DAC performance of amine-loaded MIL-101(Cr) sorbents under subambient conditions is well preserved in the polymeric frameworks.
Experimental Methods Materials Chromium(III) nitrate nonahydrate Cr(NO 3 ) 3 ·9H 2 O (99%), dimethylformamide (DMF, ACS grade), dimethylacetamide (DMAc, ACS grade), cellulose acetate (CA, M n ∼ 50000 by GPC, 39.7 wt % acetyl), and branched poly(ethylenimine) (PEI) ( M w 800 by MS) were purchased from Sigma-Aldrich. Terephthalic acid (H 2 BDC) was purchased from Acros Chemicals. Methanol (ACS grade), hexane (ACS grade), and acetone (ACS grade) were purchased from BDH Chemicals. Hexane (HPLC grade) was purchased from Fisher Scientific. Cylinders of N 2 (99.999%), bone dry CO 2 (99.9%), He (99.999%), and 400 ppm of CO 2 balanced in N 2 or He were purchased from Airgas. Synthesis of MIL-101(Cr) MIL-101(Cr) was synthesized hydrothermally based on the recipe from the literature. 40 First, 64 g of Cr(NO 3 ) 3 ·9H 2 O, 27.1 g of H 2 BDC, and 160 mL of HNO 3 aqueous solution (1 N) were added to 640 mL of deionized (DI) water. The mixture was stirred for 0.5 h followed with sonication in a water bath for 0.5 h. The mixture was subsequently transferred to a 2 L Teflon-lined autoclave and heated at 200 °C for 16 h, followed by slow cooling to room temperature. After the synthesis, large colorless needlelike crystals were removed, and the dark green powder was collected using a centrifuge. The dark green powder was then sequentially washed with DMF (0.9 L, three times), MeOH (0.9 L, twice), and acetone (0.9 L, once). Each washing step lasted for 1 day. The resulting product powders were dried under high vacuum (about 10 mTorr) at 120 °C overnight for further analysis and monolith preparation. Preparation of Solution-Based Additive Manufacturing (SBAM) Inks for Printing MIL-101(Cr) Monoliths A typical procedure to prepare the SBAM ink for printing MIL-101(Cr) monoliths with 60 wt % sorbent loading is as follows. First, MIL-101(Cr) powder was activated at 120 °C under vacuum overnight to remove residual solvents in the pores. After activation, the powder was sealed in a jar containing a mixture of DMAc, acetone, and H 2 O with the same compositions as the SBAM ink for 7 days to saturate the pores with solvent vapor. The vapor loading in MIL-101 was determined by thermogravimetric analysis (TGA). Second, a stock solution of acetone (30.8 wt %), DMAc (46.3 wt %), and DI H 2 O (22.9 wt %) was prepared, and 0.3 g of CA was dissolved in 2.16 g of the stock solution to prepare a prime dope. Third, vapor-saturated MIL-101(Cr) (2.2 g, contains 50 wt % vapor of the mixed solvent) was dispersed in 8.42 g of the stock solution by sonication in a water bath for 1.5 h before combining this dispersion dope with the prime dope. More vapor-saturated MIL-101(Cr) (2.4 g) was added to the mixture under stirring, and the mixture was further sonicated in a water bath for 1.5 h and homogenized by the Branson 450 digital sonifier with an output of 20% amplitude for a sonication time of 2 min 20 s (20 s pulse with 20 s interval). Last, the remaining CA (1.2 g) was added, and the vial containing the final mixture was subsequently put on a roller under an infrared lamp for at least 3 days to homogenize the ink before SBAM. Fabrication of Sorbent Monoliths by SBAM The structures of the sorbent monoliths were typically designed by Fusion 360. The structural files in STL format were imported into Cura, a 3D printing software of Ultimaker, and converted to G-codes for the control of the 3D printing process. 
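The nominal 60 wt % sorbent loading of the ink described above can be checked with a short mass balance over the recipe. The sketch below simply tallies the masses given in this section; interpreting the "CA concentration excluding the sorbent" as CA relative to CA plus stock solution is our reading of the authors' convention rather than something stated explicitly.

```python
# Mass balance for the 60 wt % MIL-101(Cr) SBAM ink recipe described above.
ca_prime = 0.30            # g CA dissolved in the prime dope
ca_final = 1.20            # g CA added at the end
stock_prime = 2.16         # g acetone/DMAc/H2O stock solution in the prime dope
stock_dispersion = 8.42    # g stock solution used to disperse the sorbent
mof_saturated = 2.2 + 2.4  # g vapor-saturated MIL-101(Cr), containing 50 wt % adsorbed vapor

mof_dry = 0.5 * mof_saturated            # g dry MIL-101(Cr)
ca_total = ca_prime + ca_final           # g cellulose acetate

# Sorbent loading on a dry-solids basis, MOF / (MOF + CA)
sorbent_loading = mof_dry / (mof_dry + ca_total)
print(f"dry sorbent loading: {sorbent_loading:.1%}")                    # ~60 %

# CA concentration excluding the vapor-saturated sorbent particles (our interpretation)
ca_concentration = ca_total / (ca_total + stock_prime + stock_dispersion)
print(f"CA concentration excluding sorbent: {ca_concentration:.1%}")    # ~12.4 %, cf. 12.5 wt % quoted later for the MIL-101(Cr) ink
```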
G-codes could also be generated by a Python program that uses structural parameters of monoliths, such as channel widths and monolith heights, as inputs. The SBAM 3D printer was modified from a commercial Cartesian 3D printer Creality CR-10 Max ( Figure 1 a). The SBAM ink was extruded from the nozzle by N 2 (69–90 kPa) and deposited on the platform of the 3D printer, during which the nozzle and the platform were not heated (∼23 °C). N 2 pressure and the gap between the printer nozzle and the platform were carefully controlled to facilitate good adhesion between different layers of filaments. The x – y translation speed of the nozzle was set to 1 cm s –1 during printing. After printing, the monolith was immersed in DI H 2 O for 3 days (water refreshed every day) to achieve complete phase inversion. The monolith was further immersed in methanol and hexane (ACS grade) each for 3 days during which solvent was refreshed every day. Amine Loading into CA/MIL-101(Cr) Monoliths The dual-solvent PEI loading method was adopted from the literature with minor modifications to maximize the driving force for infusion of PEI into the pores of MIL-101(Cr). 41 A typical procedure for loading PEI into CA/MIL-101(Cr) monoliths is described below. CA/MIL-101(Cr) monoliths were first activated at 100 °C under a vacuum overnight to remove residual solvents in the pores. After cooling to room temperature, the monoliths (0.95 g) were placed on a holder in VWR straight sided jars ( Scheme S1 ), to which 224 mL of hexane (HPLC grade) was added. After stirring for 5 min, 9.4 g of 33 wt % PEI/MeOH solution was added dropwise into the hexane solution under vigorous stirring. The monoliths were taken out of the solution after 24 h and dried in a fume hood overnight. The monoliths were then put in a 50 mL beaker, washed by MeOH (24 × 3 mL, 5 min for each washing step, during which the beaker was gently shaken), and dried in a vacuum at room temperature before further characterizations. The amounts of the PEI/MeOH solution were varied in the PEI loading step to tune the PEI loadings in CA/MIL-101(Cr) monoliths. CO 2 Adsorption Measurements The equilibrium CO 2 uptake capacities of CA/MIL-101/PEI monoliths were measured volumetrically under dry ambient (25 °C) and subambient (− 20 °C) conditions using a surface area and porosity (SAP) system (autosorb iQ/Quantachrome). About 100 mg of the monolith samples was activated at 110 °C under vacuum for 3 h before measuring CO 2 adsorption capacities. During measurement, CO 2 is automatically dosed into the sample cells, and the cell pressures were checked every 1 min until the pressure in the cell was within the P tolerance (regulated by the tolerance value “0” to ensure the tightest match between the desired and achieved relative pressures). Because the SAP system does not provide information about CO 2 uptake kinetics, the CO 2 uptake profiles of CA/MIL-101/PEI monoliths were also gravimetrically measured with a TGA/differential scanning calorimetry (DSC) system (STA 449 F3 Jupiter/NETZSCH) under dry conditions at −20 and 25 °C. About 20 mg of the sample was first activated at 110 °C under a He flow (90 mL min –1 ) for 3 h, followed by thermal equilibration under adsorption temperature conditions (−20 or 25 °C). The sample was then exposed to 400 ppm of CO 2 balanced in He (90 mL min –1 ) for 12 h. The CO 2 uptake profiles of CA/13X monoliths at 50 kPa of CO 2 partial pressure and 30 °C were gravimetrically measured with a TGA Q550 from TA Instruments. 
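The preceding paragraph notes that G-codes can also be generated by a Python program from structural parameters such as channel widths and monolith heights. That program is not reproduced in the paper, so the sketch below is only a hypothetical illustration of the idea: it emits plain travel moves (G0/G1) for alternating 0°/90° layers of parallel filaments at the 600 mm min⁻¹ (1 cm s⁻¹) translation speed quoted above. The layer height, filament pitch, and lattice pattern are assumptions, and pneumatic extrusion is assumed to be switched externally rather than through extruder (E-axis) commands.

```python
# Hypothetical sketch of G-code generation from monolith structural parameters.
# The authors' actual Python program is not published; geometry choices here are assumptions.

def monolith_gcode(width_mm=15.0, n_layers=10, pitch_mm=1.0,
                   layer_height_mm=0.4, feed_mm_per_min=600):
    """Return G-code tracing parallel filaments, rotated 90 degrees every layer."""
    lines = ["G21 ; units in mm", "G90 ; absolute positioning"]
    n_filaments = int(width_mm / pitch_mm) + 1
    for layer in range(n_layers):
        z = (layer + 1) * layer_height_mm
        lines.append(f"G1 Z{z:.2f} F{feed_mm_per_min} ; move to layer {layer + 1}")
        for i in range(n_filaments):
            offset = i * pitch_mm
            if layer % 2 == 0:      # filaments running along x
                start, end = (0.0, offset), (width_mm, offset)
            else:                   # filaments running along y
                start, end = (offset, 0.0), (offset, width_mm)
            lines.append(f"G0 X{start[0]:.2f} Y{start[1]:.2f} ; travel to filament start")
            lines.append(f"G1 X{end[0]:.2f} Y{end[1]:.2f} F{feed_mm_per_min} ; deposit filament")
    return "\n".join(lines)

if __name__ == "__main__":
    print(monolith_gcode())
```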
About 25 mg of the CA/13X monolith sample was activated at 150 °C for 2 h under a N 2 flow (100 mL min –1 ), followed by thermal equilibration under adsorption temperature conditions (30 °C). The sample was then exposed to 50% CO 2 balanced in N 2 (20 mL min –1 in total) for 2 h. Temperature swing adsorption–desorption cyclic tests were performed for up to 14 cycles with the TGA/DSC system. The CO 2 adsorption step under the 400 ppm of CO 2 balanced in He gas stream (90 mL min –1 ) at −20 °C and the regeneration step under the He gas stream (90 mL min –1 ) at 60 °C were performed for 2 h each. CO 2 Temperature-Programmed Desorption (TPD) TPD experiments were performed by using the TGA/DSC system. After 400 ppm of CO 2 adsorption with the powder sorbents for 12 h at −20 °C, the inlet gas flow was changed to pure He, and the TGA/DSC chamber was purged for 1 h at the adsorption temperature condition. The chamber temperature was then slowly increased at a rate of 0.5 °C min –1 to 110 °C to desorb CO 2 from the powder sorbents. During the entire process, the concentrations of CO 2 and H 2 O of the outlet gas stream were continuously measured by an infrared analyzer LI-COR LI-850 CO 2 /H 2 O gas analyzer to deconvolute the H 2 O and CO 2 desorption profiles. Breakthrough Experiments A schematic illustration of the setup for breakthrough experiments is shown in Scheme S2 . Before breakthrough experiments, the bed of the CA/MIL-101/PEI monoliths was purged by 200 sccm of dry N 2 at 90 °C for 12 h. After activation, the bed was submerged in a bath of a mixture of ethylene glycol and water at predetermined temperatures for at least 0.5 h before starting the breakthrough experiments. For breakthrough experiments under dry conditions, a stream of 400 ppm of CO 2 balanced in N 2 was introduced into the bed, and the concentrations of CO 2 and H 2 O at the outlet of the bed were recorded by a LI-COR LI-850 CO 2 /H 2 O gas analyzer. For breakthrough experiments under wet conditions, the relative humidity of the feed gas was regulated by a LI-COR LI-650 dew point generator. More details about the custom-built fixed bed system and analysis of the breakthrough experiments are available in the Supporting Information .
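For the breakthrough experiments described above, CO2 uptake capacities follow from integrating the difference between the feed and outlet CO2 concentrations over time. The details of the authors' data treatment are in their Supporting Information; the sketch below is a generic version of that calculation under simple assumptions (ideal gas, constant molar flow, sccm referenced to 0 °C and 1 atm), applied to an invented sigmoidal breakthrough curve for illustration only.

```python
# Generic breakthrough-curve integration (illustrative; not the authors' exact data treatment).
import numpy as np

def breakthrough_capacities(t_min, c_out_ppm, flow_sccm, mass_g,
                            c_in_ppm=400.0, breakthrough_frac=0.05):
    """Integrate F*(C_in - C_out) dt to get CO2 uptake in mmol per g of monolith."""
    t = np.asarray(t_min, dtype=float)
    c_out = np.asarray(c_out_ppm, dtype=float)
    molar_flow = flow_sccm / 22_414.0 * 1000.0           # mmol of gas per min (0 C, 1 atm basis)
    captured = molar_flow * (c_in_ppm - c_out) * 1e-6    # mmol CO2 captured per min
    # cumulative trapezoidal integration over time
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (captured[1:] + captured[:-1]) * np.diff(t))))
    q_eq = cum[-1] / mass_g                              # pseudoequilibrium capacity, mmol/g
    idx = np.argmax(c_out >= breakthrough_frac * c_in_ppm)   # first point at 5 % breakthrough
    q_bt = cum[idx] / mass_g                             # breakthrough capacity, mmol/g
    return q_eq, q_bt

# Invented breakthrough curve, for illustration only
t = np.linspace(0.0, 600.0, 601)                         # min
c_out = 400.0 / (1.0 + np.exp(-(t - 250.0) / 40.0))      # ppm
q_eq, q_bt = breakthrough_capacities(t, c_out, flow_sccm=100.0, mass_g=5.0)
print(f"pseudoequilibrium uptake: {q_eq:.2f} mmol/g, 5% breakthrough uptake: {q_bt:.2f} mmol/g")
```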
Results and Discussion Fabrication of Zeolite 13X Monoliths via SBAM Cellulose acetate (CA) was selected as the polymer component of the SBAM printing ink for several reasons. First, CA is readily available and affordable for the potential mass production of sorbent monoliths. Second, the abundant polar functional groups of CA can strengthen the interactions between CA and sorbent particles with polar surfaces, which can hinder loss of sorbent particles from the printed monoliths in postprinting modification steps such as solvent exchange and amine impregnation. Third, solution systems of CA containing solvents and nonsolvents have been extensively reported in the literature for the preparation of CA membranes. The existence of detailed phase diagrams for these systems assists rapid screening and identification of suitable compositions of ternary inks for SBAM. After the polymer component is identified, it is important to select suitable solvents for the SBAM ink, as solvent volatility is critical for controlling the phase separation speed and corresponding textural properties of the printed monoliths. N , N -Dimethylacetamide (DMAc) and water were first selected as the solvent and nonsolvent for CA, respectively. The cloud point technique was employed to determine the binodal line of the CA/DMAc/H 2 O ternary system ( Figure S1 ). Although a room-temperature homogeneous ink in the vicinity of the binodal line was successfully identified, it was incapable of rapid phase inversion (i.e., solidification within 1 min after air exposure), which can be attributed to the slow evaporation of the relatively nonvolatile DMAc. Therefore, acetone was selected as a cosolvent, along with DMAc. Because acetone is highly volatile, its evaporation is expected to quickly shift the ink composition away from the solvent pole in the phase diagram and trigger phase inversion after the composition crosses the binodal line. As shown in Figure S1 , changing pure DMAc to the mixture of DMAc and acetone (1:1 mass ratio) shifts the position of binodal line to the right and allows for higher nonsolvent (H 2 O) content in the ink. We hypothesize that this leads to greater porosity in the printed structures. 36 Zeolite 13X was selected as the model adsorbent for incorporation into CA/DMAc/acetone/H 2 O dopes for monolith preparation by SBAM. The CA content was fixed at 15 wt % (excluding zeolite 13X) to achieve good ink fluidity and viscosity; the detailed procedures to prepare the SBAM inks containing zeolite 13X are available in Section 3 of the Supporting Information . The SBAM inks remained homogeneous for at least 1 week after they were prepared. However, these inks did stratify after long-time settling (∼6 months), which is likely because the gradual aggregation of the adsorbent particles accelerates the settling. A customized 3D printer, as illustrated in Figure 1 a, was built to deposit polymer filaments containing sorbents. The dope deposition rate was controlled by the N 2 pressure in the ink cartridge headspace. The print speeds and layer heights were controlled to allow for good adhesion between different layers. CA/13X monoliths were successfully prepared with excellent fidelity compared to the designed structures ( Figure 1 b). At 50 wt % zeolite 13X loading, the printed monolith had excellent adhesion between different layers of filaments ( Figure 1 c), and zeolite 13X crystals were randomly distributed in the hierarchical porous CA matrix because of spinodal decomposition ( Figure 1 d). 
However, when zeolite 13X loading increased beyond 60 wt %, the obtained monoliths exhibited dense structures with low porosity ( Figure S2a ). A possible reason for this could be solvent adsorption in the zeolite 13X particles during ink preparation, which would result in phase inversion of the ink before deposition on the printing platform. To prevent undesired preprinting phase separation of the polymer ink due to addition of zeolite 13X, dry zeolite 13X was presaturated with mixed solvent vapor that was in equilibrium with the mixed solvents used for dope preparation. The vapor-loaded zeolite 13X was subsequently used for printing a monolith (denoted as CA/13X) containing 60 wt % zeolite 13X. This monolith exhibited significantly improved porosity ( Figure S2b ) compared to monoliths prepared without the presaturation steps. The loading of zeolite 13X can be further increased to 70 wt % without compromising the printing quality of CA/13X monoliths ( Figure S2c ). Interestingly, no significant differences in BET surface areas and pore volumes were observed between two monolith samples prepared with different zeolite 13X samples (presaturated or not) after taking the different zeolite 13X loadings into consideration ( Figure S2d ). These results suggest that the presaturation step mainly affects the macroporosity of the monoliths. Measurement of the CO 2 uptake kinetics of CA/13X monoliths containing 65 wt % zeolite 13X by thermogravimetric analysis (TGA) reveals rapid CO 2 uptake kinetics ( Figure S3a ) and a CO 2 capacity of 2.6 mmol g monolith –1 (4.0 mmol g zeolite –1 at 50 kPa and 30 °C), which are comparable to the powder zeolite 13X. 42 These results suggest that incorporation of zeolite into the monolith structures has negligible adverse effects on the CO 2 uptake properties of zeolite 13X. Interestingly, an uneven pore size distribution within the printed structure was observed. For example, the bottoms of the filaments from the upper printed layers are typically more porous than the upper surfaces of the filaments from the lower layers ( Figure 1 e). Such heterogeneity in pore sizes persists regardless of our efforts to optimize ink compositions. On the other hand, individual filaments extruded by a syringe using the same ink possess evenly distributed pore sizes ( Figure S3b ). To reconcile these two observations, we speculate that the uneven distribution of pore sizes is related to the 3D printing process. Solvents from newly deposited filaments serve as annealing agents and reduce the surface pore sizes of the filaments from the lower layers. Fabrication of MIL-101(Cr) Monoliths via SBAM Monoliths composed of CA and MIL-101(Cr) crystals [denoted as CA/MIL-101(Cr)] were successfully prepared by SBAM using ink formulations, sorbent presaturation, and 3D printing parameters similar to those for preparing CA/13X monoliths. MIL-101(Cr) powder was first synthesized based on a previously reported large-scale production method. 40 Powder X-ray diffraction (PXRD) patterns reveal the phase purity of the activated MIL-101(Cr) product ( Figure S4a ). The particle sizes of MIL-101(Cr) crystals were less than 1 μm ( Figure S4b ), which is beneficial for preparing a well-mixed polymer ink containing MIL-101(Cr) for 3D printing. The N 2 sorption isotherm at −195.8 °C reveals a BET surface area of 3057 m 2 g –1 and a pore volume of 1.58 cm 3 g –1 of MIL-101(Cr). 
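The per-zeolite capacity quoted above follows directly from the monolith-basis capacity and the zeolite weight fraction; a one-line check using the values from the text:

```python
# Normalizing the monolith capacity to the zeolite it contains (values from the text)
q_monolith = 2.6          # mmol CO2 per g of CA/13X monolith (50 kPa, 30 C)
zeolite_fraction = 0.65   # 65 wt % zeolite 13X in the monolith
q_zeolite = q_monolith / zeolite_fraction
print(f"{q_zeolite:.1f} mmol CO2 per g of zeolite 13X")   # ~4.0, as quoted above
```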
Because MIL-101(Cr) has a much higher porosity than zeolite 13X (0.34 cm 3 g –1 ), 43 the same weight content of MIL-101(Cr) in the ink will consume larger solvent volume fractions compared to 13X, leading to much greater ink viscosity. Therefore, CA concentrations were adjusted to 12.5 wt % (excluding the mass of sorbent) to achieve a printable viscosity for the ink containing MIL-101(Cr). Furthermore, to minimize potential negative effects of solvent annealing on CO 2 uptake kinetics, the width of the channel walls and monolith walls was set to the width of a single filament, so that each deposited filament will be minimally affected by annealing solvent vapor from peripheral filaments. As shown in Figure 2 a, the shapes of CA/MIL-101(Cr) monolith channels were well-defined, and the channel number per square inch (CPSI) can be as high as 644 in. –2 . TGA suggests the MIL-101(Cr) loading in the monolith was 62 wt % ( Figure S4c ). No dense skin layers were observed on the monolith surface ( Figure 2 b), and MIL-101(Cr) crystals are evenly distributed in the macroporous CA networks without aggregation ( Figure 2 c). PXRD patterns of CA/MIL-101(Cr) exhibit characteristic diffraction reflections for MIL-101(Cr), suggesting that MIL-101(Cr) remains crystalline after SBAM and subsequent solvent exchange processes ( Figure 2 d). The BET surface area of the monolith was calculated to be 1569 m 2 g –1 based on its N 2 adsorption isotherm at −195.8 °C ( Figure 2 e), which corresponds well with the surface area of MIL-101(Cr) and 62% loading. Pore size distribution analysis shows that the porosity of MIL-101(Cr) is well preserved ( Figure S5a ), which, along with monolith macropores, is beneficial for the incorporation of amines and fast diffusion of CO 2 . Fabrication and CO 2 Sorption Properties of PEI-Loaded CA/MIL-101(Cr) Monoliths In our prior work of preparing MIL-101(Cr)-supported amine sorbents for DAC, we observed good agreement between the experimental and theoretical pore volume of PEI-loaded MIL-101(Cr) using the density of branched PEI ( M w 800). 13 This finding suggests the effective insertion of PEI-800 inside the pores of MIL-101(Cr) using this procedure. A dual-solvent strategy was employed to maximize the driving force for infusion of poly(ethylenimine) (PEI) into the pores of MIL-101(Cr). 41 , 44 Hexane was selected as the nonpolar solvent, and methanol was used as the polar solvent to dilute PEI and load it into the MIL-101(Cr) powder. After PEI infusion, it is crucial to wash the CA/MIL-101(Cr) monoliths with methanol to remove excess PEI that blocks CO 2 diffusion pathways of CO 2 to the well-dispersed amine sites in the pores of MIL-101(Cr). TGA combustion experiments show that ∼14 mmol of N g MOF –1 PEI loading was achieved in a monolith using a typical dual-solvent recipe (32.7 g of hexane, 2.1 g of 33 wt % PEI solution in methanol), which is equivalent to 38.5 wt % PEI in PEI-loaded MIL-101(Cr) immobilized in the monolith. PXRD patterns of the PEI-loaded CA/MIL-101(Cr) monoliths, denoted as CA/MIL-101/PEI- X , where X indicates a N loading of X mmol per gram of MIL-101(Cr), suggest that MIL-101(Cr) particles in the monoliths remain crystalline. The reduced intensity of the diffraction peaks at 2θ values smaller than 7° is attributed to the scattering of unorganized PEI in the pores of MIL-101(Cr) ( Figure 2 d). 
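The viscosity argument above can be made semi-quantitative with the pore volumes reported in the text (1.58 cm³ g⁻¹ for MIL-101(Cr) versus 0.34 cm³ g⁻¹ for zeolite 13X). The sketch below estimates how much solvent each sorbent can immobilize inside its pores at the same weight loading; the mixed-solvent density (~0.9 g cm⁻³) and the total solvent mass (taken from the ~60 wt % MIL-101(Cr) recipe and applied to both sorbents purely for comparison) are assumed values for illustration only.

```python
# Rough comparison of solvent immobilized inside sorbent pores at equal weight loading.
pore_volume = {"MIL-101(Cr)": 1.58, "zeolite 13X": 0.34}   # cm3 of pores per g of sorbent (from the text)
solvent_density = 0.9                                      # g/cm3, assumed for the acetone/DMAc/H2O mix

sorbent_mass = 2.3   # g dry sorbent in the ink (see recipe in the Experimental Methods)
solvent_mass = 12.9  # g total solvent in that ink (stock solution + presaturation vapor)

for name, v_pore in pore_volume.items():
    in_pores = sorbent_mass * v_pore * solvent_density     # g of solvent locked inside the pores
    free = solvent_mass - in_pores                         # g of solvent left to fluidize the ink
    print(f"{name}: ~{in_pores:.1f} g of solvent in pores, ~{free:.1f} g free")
# MIL-101(Cr) ties up several times more solvent in its pores, leaving less free liquid and
# hence a more viscous ink at the same weight loading, consistent with the lower CA
# concentration (12.5 wt %) chosen for the MIL-101(Cr) ink.
```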
Although CA has the potential for hydrolysis in basic and acidic solutions, attenuated total reflection infrared spectroscopy (ATR-IR) ( Figure S5b ) suggested that the PEI infusing process is a physical process not involving any chemical transformations (e.g., hydrolysis of CA). In addition, SEM shows that the CA framework maintained the same macroporous texture after PEI infusion ( Figure S5c ). These results together suggest that CA/MIL-101(Cr) monoliths have great stability under PEI loading conditions. The CO 2 uptakes of CA/MIL-101/PEI-14.5 at different pressures measured by the volumetric SAP system are shown in Figure 3 a. While 25 °C was selected as a representative temperature for ambient conditions, −20 °C was selected as the extreme cold temperature to magnify the temperature effects on DAC performance of amine-loaded MIL-101(Cr) adsorbents and to compare against our prior study. 12 At 400 ppm, the CO 2 uptakes in the CA/MIL-101/PEI monolith were 1.1 and 0.57 mmol of g monolith –1 at −20 and 25 °C, respectively. Assuming that PEI-loaded MIL-101(Cr) provides all the CO 2 sorption sites and the CA framework only serves as the support, these values correspond to 1.5 and 0.77 mmol g sorbent –1 at −20 and 25 °C, respectively, which correlates well with our previous work where MIL-101(Cr) powders with comparable PEI loading exhibited similar CO 2 uptakes under the same testing conditions ( Figure S6 ). 12 The same PEI loading method was repeated three times to provide consistent CO 2 uptakes at 400 ppm of CO 2 and −20 °C, suggesting good reproducibility of this PEI loading method. As the CO 2 adsorption heats of CA/MIL-101/PEI monoliths may vary at different temperatures, the commonly used method of measuring several CO 2 adsorption isotherms at different temperatures with the SAP system and estimating the adsorption heats based on the Clausius–Clapeyron equation may not apply. Therefore, the CO 2 isosteric heat at 25 °C was directly measured as −80 kJ mol –1 by integrating the measured heat flow during the CO 2 sorption experiment performed on TGA/DSC ( Figure S7 ). This value is between the low adsorption heat of MIL-101(Cr) with low amine content (e.g., 30 wt % TEPA) and the high adsorption heat of MIL-101(Cr) with high amine content (e.g., 50 wt %) found in prior work. 12 To study how the PEI loading affects the CO 2 uptake performance, several CA/MIL-101/PEI monoliths with varying amine loadings were prepared by varying the amount of PEI solution used in the PEI infusion step. In general, the amount of incorporated PEI in the monoliths is positively correlated to the amount of PEI used during the PEI infusion step. As shown in Figure 3 b, the CO 2 uptake capacity increased with the amine loading but plateaued when the amine loading was more than 20 mmol N g MOF –1 . The plateau in the CO 2 uptake might be due to pore blockage at high PEI loadings in MIL-101(Cr). Most monolith samples exhibit amine efficiencies (defined as the moles of sorbed CO 2 uptake normalized by the moles of amine sites) between 0.15 and 0.20, which are comparable to the amine efficiencies of PEI or TEPA impregnated MIL-101(Cr) powder sorbents. 12 Some monoliths with relatively low amine loadings (<12 mmol N g MOF –1 ) show amine efficiencies that are smaller than 0.12. 
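The conversion from monolith-basis to sorbent-basis uptake and the amine-efficiency values quoted above can be reproduced with a short mass-balance sketch. All inputs are values stated in the text (62 wt % MIL-101(Cr) in the pristine monolith, 38.5 wt % PEI in the PEI-loaded MOF, 14.5 mmol N per g of MOF); treating MOF plus PEI, but not CA, as the "sorbent" is our reading of the authors' convention.

```python
# Sketch of the normalizations used above for CA/MIL-101/PEI-14.5 (inputs from the text).
mof_fraction_monolith = 0.62    # MIL-101(Cr) weight fraction in the pristine CA monolith
pei_fraction_sorbent = 0.385    # PEI weight fraction in PEI-loaded MIL-101(Cr)
n_loading = 14.5                # mmol N per g of MIL-101(Cr)

# per 1 g of pristine CA/MIL-101(Cr) monolith
mof = mof_fraction_monolith
ca = 1.0 - mof
pei = mof * pei_fraction_sorbent / (1.0 - pei_fraction_sorbent)   # g PEI infused
monolith_pei = mof + ca + pei                                     # g of PEI-loaded monolith
sorbent_fraction = (mof + pei) / monolith_pei                     # ~0.73 of the final monolith is MOF + PEI

for temp_c, q_monolith in [(-20, 1.1), (25, 0.57)]:               # mmol CO2 per g monolith (400 ppm)
    q_sorbent = q_monolith / sorbent_fraction                     # mmol CO2 per g sorbent
    print(f"{temp_c} C: {q_sorbent:.2f} mmol/g_sorbent")          # ~1.5 and ~0.8, close to the quoted 1.5 and 0.77

n_per_g_sorbent = n_loading * (1.0 - pei_fraction_sorbent)        # mmol N per g sorbent
amine_efficiency = (1.1 / sorbent_fraction) / n_per_g_sorbent     # at -20 C, 400 ppm
print(f"amine efficiency at -20 C: {amine_efficiency:.2f}")       # ~0.17, within the 0.15-0.20 range
```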
As these monoliths were treated with small amounts of PEI solution during the PEI infusion process, the small PEI concentration gradients during this step may result in slow PEI diffusion and nonuniform PEI distribution in the monoliths, which eventually lead to poor amine efficiencies. Considering the good reproducibility of the PEI loading experiments, the dual-solvent amine loading method specified in the Experimental Methods section was subsequently used to modify large CA/MIL-101(Cr) monoliths for breakthrough experiments. Although it is convenient to measure CO 2 uptakes at different CO 2 partial pressures and temperatures with the SAP system, it does not provide information on CO 2 uptake kinetics. Therefore, dynamic uptake profiles ( Figure 4 ) using 400 ppm of CO 2 at −20 °C were collected by the TGA/DSC setup, which revealed comparable CO 2 uptake rates for two CA/MIL-101(Cr) monoliths with high (20.2 mmol N g MOF –1 ) and low (13.4 mmol N g MOF –1 ) PEI loading. Both monoliths reached pseudoequilibrium ( M / M ∞ = 0.95) in about 2 h, which is similar to the CO 2 uptake kinetics of PEI-impregnated MIL-101(Cr) in powder form. 12 This suggests that the CA frameworks have negligible effects on CO 2 diffusion under the subambient conditions used here, despite the presence of narrow pore sizes at the interfaces between the different layers of filaments. It is worth noting that a higher productivity can be achieved at the process scale with optimized durations of adsorption/desorption steps and higher flow rates. 45 However, more detailed experiments are needed to further support this supposition. Temperature-programmed desorption (TPD) experiments reveal that interactions of CO 2 with CA/MIL-101/PEI monoliths are dependent on the PEI loading in the monoliths. For CA/MIL-101/PEI-20.2, a bimodal CO 2 desorption profile with a peak desorption temperature of 52.3 °C was observed (inset of Figure 4 a). In comparison, a unimodal desorption profile with the peak desorption temperature at 26.4 °C was observed for CA/MIL-101/PEI-13.4 (inset of Figure 4 b). These results suggest that increased PEI loading provides more strong chemisorption sites for CO 2 interactions, highlighting the importance of optimizing the PEI loading in these monoliths for a balance of high CO 2 uptake and facile regeneration. Similar interaction mechanisms dependent on PEI loading have been reported in prior works on PEI-based adsorbents in powder form. 12 , 46 − 48 In addition, the CO 2 kinetics of CA/MIL-101/PEI pellets of different sizes, pellet-L with large size (2 × 4 × 3 mm 3 ) and pellet-S with small size (1 × 0.2 × 5 mm 3 ) ( Figure S8a ), were compared with the monoliths prepared by 3D printing. The detailed procedures to prepare these pellets are available in the Supporting Information . As shown in Figure S8b , the monolith and pellet-S reach a normalized CO 2 uptake capacity of 0.9 in 160 min, while pellet-L could reach a normalized CO 2 uptake capacity of only 0.66 in the same amount of time. The much faster CO 2 sorption kinetics of pellet-S and CA/MIL-101/PEI monoliths suggests the importance of controlling the CO 2 diffusion lengths in the composites of CA and MIL-101(Cr). Notably, the comparable CO 2 uptake kinetics in pellet-S and monoliths suggest that the nonuniform pore size distribution (due to repetitive filament deposition and solvent annealing during 3D printing, Figure S8c ) does not compromise the CO 2 diffusion rate or sorption uptake kinetics in the CA/MIL-101/PEI monoliths. 
This is likely because the average size of the population of these pores might still be too large to change the dominant mass transfer resistance in the monolith, which is the CO 2 diffusion in the MIL-101(Cr)-supported PEI. Due to the moderate CO 2 adsorption heat and CO 2 affinity of CA/MIL-101/PEI monoliths, less energy is required to desorb equivalent amounts of CO 2 compared to the case of high heats of adsorption that are found in many DAC sorbents. 12 A cyclic adsorption–desorption experiment was designed in the TGA/DSC for CA/MIL-101/PEI-14.5 to study its recyclability, with a 2 h CO 2 adsorption step at −20 °C and a 2 h desorption step at 60 °C. The average working capacity was about 0.95 mmol g monolith –1 over 14 cycles ( Figure 5 ). The decrease of about 0.15 mmol/g in the third cycle is attributed to an instrumental measurement error as the adsorption and desorption runs were performed continuously and automatically by the TGA/DSC setup. Although the consistent CO 2 working capacity suggests decent stability of CA/MIL-101/PEI-14.5 over this time frame and a possibility of sorbent regeneration at 60 °C, more detailed process studies will be required to verify the benefit of the low CO 2 heat of adsorption for DAC at low temperatures. Mechanical Strength and DAC Performance of CA/MIL-101/PEI Monoliths After SBAM methods for printing CA/MIL-101(Cr) monoliths were developed, 1.5 cm × 1.5 cm pieces of CA/MIL-101/PEI monoliths were fabricated ( Figure S9a ) for breakthrough experiments. There are two conceptual ways to prepare such monoliths with large dimensions, namely, a bottom-up method and a “slice and stack” method ( Scheme S3 ). The bottom-up method is relatively straightforward in terms of the printing process. However, the monoliths prepared by this method will deform as their height increases because the recently phase-separated monolith foundation does not have the mechanical strength to support the weight of the growing monolith. For the “slice and stack” method, there are two ways to slice monoliths into small parts. Horizontal slicing has been adopted in the literature; however, as a large number of monolith pieces are required to be stacked into a tall monolith, subsequent alignment of the monolith channels is challenging. In comparison, vertical slicing results in fewer monolith pieces, which is beneficial for obtaining straight channels with little resistance for gas flows. Additionally, it is more efficient to fabricate large monolith pieces for vertical slicing methods than to print many small pieces for horizontal slicing methods from the perspective of large-scale manufacturing via 3D printing. Therefore, the "vertically sliced" method was adopted to prepare CA/MIL-101/PEI monoliths, denoted as monolith-L, for mechanical testing and dynamic column experiments. The results of compression tests using monolith-L are shown in Figure S9b . Uniaxial force was applied in the z direction of the monoliths. Interestingly, the stress gradually increased when the strain was less than 0.15 but increased rapidly for the higher strain region. Consistent results were observed for two monolith samples. Similar mechanical responses for the monoliths have also been reported for 3D-printed zeolite monoliths. 27 Further inspections of the monolith sample after the mechanical test showed that deformation and delamination mainly occurred in “ridges” of the monolith samples (the channels of the bed when monolith-L are packed together; Figure S9c ). 
In comparison, the “base” of the monolith only underwent slight compression deformation. This is because the ridges have much smaller cross-sectional areas (less than 60% of the base) and hence much greater stress compared to that applied to the base. The base of the monoliths did not break at the maximum loading of the testing device (∼2K N), suggesting decent compressive strength for the monolith-L prepared by SBAM. We envision that further optimization of 3D printing parameters and monolithic structures can improve the adhesion of different 3D-printed layers and enhance the overall mechanical stability. Monolith-L was packed in a homemade stainless-steel housing ( Figure S10a ) for dynamic breakthrough experiments using 400 ppm of CO 2 under dry conditions. As shown in Figure 6 a, when the flow rate of the feed gas was 200 sccm at −20 °C, CO 2 broke through the column almost instantly, possibly due to a CO 2 bypass through the straight channels of the monoliths. The pseudoequilibrium CO 2 uptake capacity of these monoliths was 1.05 mmol g monolith –1 , which is consistent with the results of the gravimetric and volumetric CO 2 uptake measurements. The volumetric CO 2 uptake of the bed made of monolith-L is calculated to be 0.244 mmol cm –3 by using the total bed volume, including the channels, to calculate the apparent monolith density. This value can be further enhanced by increasing the density of 3D-printed filaments in monoliths and reducing the channel size. When the flow rate was reduced to 100 and 50 sccm, the pseudoequilibrium CO 2 uptake capacities were almost unchanged, as shown in Figure 6 b, but the CO 2 breakthrough uptake capacities of the bed (i.e., the CO 2 uptake capacity when the normalized CO 2 concentration reached 0.05) increased to 0.53 and 0.60 mmol g monolith –1 for gas flow rates of 100 and 50 sccm, respectively. This is likely because the longer residence time in the bed allows more CO 2 to be captured before breaking through the bed. Following the dry 400 ppm of CO 2 breakthrough experiments, a TPD experiment was performed while purging the monoliths with a constant 100 sccm N 2 flow. A desorption peak temperature of 40 °C was observed ( Figure S10b ), which is slightly higher than the desorption peak temperatures for testing CA/MIL-101/PEI monoliths with comparable PEI loading ( Figure 4 b). This difference is possibly due to the external heating of the packed bed in breakthrough experiments which cannot achieve fast and uniform heating of the monoliths as in the case of TPD experiments using much smaller amount of samples performed in the TGA/DSC system. Breakthrough experiments using 400 ppm CO 2 were also repeated at higher temperatures. As shown in Figure 6 c, the CO 2 uptake capacities of the monoliths decrease as temperature increases, although the monolith was found to still adsorb 0.60 mmol g monolith –1 CO 2 at 25 °C under dry conditions, comparable to other shaped DAC sorbents. 18 These results highlight the potential versatility of CA/MIL-101/PEI monoliths for DAC under different climate environments. The effect of humidity on direct air CO 2 capture at −20 °C was also explored by presaturating monolith-L with 80% RH moisture in N 2 followed by the introduction of wet (80% RH) 400 ppm of CO 2 in N 2 . 
As the water partial pressure used in the breakthrough experiments (0.8 mbar) was lower than the sublimation vapor pressure of water (0.99 mbar) at −20 °C, 49 water vapor should not condense in the column, but it might condense/freeze in the pores of the MIL-101(Cr) and perhaps in the pores of the polymer support and affect the kinetics of CO 2 adsorption. As shown in Figure S11 , moisture broke through the column instantly with a gradually increasing moisture signal at the column exit throughout the presaturation experiment, which suggests sluggish moisture uptake kinetics of the monoliths at −20 °C. The water uptake calculated from the breakthrough experiment is about 21.0 mmol g monolith –1 , which is slightly higher than the water uptake measured gravimetrically at 78% RH at 25 °C ( Figure S12 ). Interestingly, an instantaneous CO 2 breakthrough from the column was also observed in the wet CO 2 breakthrough experiment. In contrast to the CO 2 breakthrough curves under dry conditions, the significantly broader breakthrough curve of wet CO 2 suggests a noticeable decline of CO 2 uptake kinetics ( Figure 7 a), which is possibly due to the additional mass transfer resistance originating from preadsorbed water molecules around amine sites or perhaps in the mesopores/macropores of the CA polymer matrix. Despite the decreased CO 2 uptake kinetics, the CO 2 uptake capacity increased by about 36% from 1.05 mmol g monolith –1 under dry conditions to 1.43 mmol g monolith –1 in the wet monoliths. This enhanced CO 2 uptake under humid conditions is attributed to the improved chain mobility of PEI molecules due to the plasticizing effects of water 12 and/or enabling CO 2 sorption as bicarbonate. 14 , 50 , 51 While some DAC processes will likely operate with constantly hydrated sorbents (e.g., DAC in humid regions using direct contact steam-stripping desorption), in some DAC processes, humid air will be fed into a dry or partially dry DAC bed. As it takes much longer for the bed to saturate with water due to the slow water sorption kinetics, the water uptake in the bed at the end of the CO 2 adsorption step should be much lower than the maximum uptake capacity in these latter types of DAC processes. To mimic this specific scenario, wet 400 ppm of CO 2 with 80% RH moisture was directly introduced to the dry bed of CA/MIL-101/PEI monoliths without presaturation of the bed. Interestingly, the mean residence time of CO 2 became longer ( Figure 7 b), and the pseudoequilibrium of the CO 2 uptake of monolith-L was increased by 22% compared to the dry condition. Meanwhile, the coadsorbed water capacity was 7.0 mmol g monolith –1 , which is about 1/3 of the water uptake capacity (21.0 mmol g monolith –1 ) determined by the water presaturation breakthrough curve ( Figure S11 ). This observation suggests that optimizing the duration of the adsorption step could potentially benefit the overall CO 2 working capacity of the process without paying the substantial energy penalty to remove excessive coadsorbed water molecules.
Results and Discussion Fabrication of Zeolite 13X Monoliths via SBAM Cellulose acetate (CA) was selected as the polymer component of the SBAM printing ink for several reasons. First, CA is readily available and affordable for the potential mass production of sorbent monoliths. Second, the abundant polar functional groups of CA can strengthen the interactions between CA and sorbent particles with polar surfaces, which can hinder loss of sorbent particles from the printed monoliths in postprinting modification steps such as solvent exchange and amine impregnation. Third, solution systems of CA containing solvents and nonsolvents have been extensively reported in the literature for the preparation of CA membranes. The existence of detailed phase diagrams for these systems assists rapid screening and identification of suitable compositions of ternary inks for SBAM. After the polymer component is identified, it is important to select suitable solvents for the SBAM ink, as solvent volatility is critical for controlling the phase separation speed and corresponding textural properties of the printed monoliths. N , N -Dimethylacetamide (DMAc) and water were first selected as the solvent and nonsolvent for CA, respectively. The cloud point technique was employed to determine the binodal line of the CA/DMAc/H 2 O ternary system ( Figure S1 ). Although a room-temperature homogeneous ink in the vicinity of the binodal line was successfully identified, it was incapable of rapid phase inversion (i.e., solidification within 1 min after air exposure), which can be attributed to the slow evaporation of the relatively nonvolatile DMAc. Therefore, acetone was selected as a cosolvent, along with DMAc. Because acetone is highly volatile, its evaporation is expected to quickly shift the ink composition away from the solvent pole in the phase diagram and trigger phase inversion after the composition crosses the binodal line. As shown in Figure S1 , changing pure DMAc to the mixture of DMAc and acetone (1:1 mass ratio) shifts the position of binodal line to the right and allows for higher nonsolvent (H 2 O) content in the ink. We hypothesize that this leads to greater porosity in the printed structures. 36 Zeolite 13X was selected as the model adsorbent for incorporation into CA/DMAc/acetone/H 2 O dopes for monolith preparation by SBAM. The CA content was fixed at 15 wt % (excluding zeolite 13X) to achieve good ink fluidity and viscosity; the detailed procedures to prepare the SBAM inks containing zeolite 13X are available in Section 3 of the Supporting Information . The SBAM inks remained homogeneous for at least 1 week after they were prepared. However, these inks did stratify after long-time settling (∼6 months), which is likely because the gradual aggregation of the adsorbent particles accelerates the settling. A customized 3D printer, as illustrated in Figure 1 a, was built to deposit polymer filaments containing sorbents. The dope deposition rate was controlled by the N 2 pressure in the ink cartridge headspace. The print speeds and layer heights were controlled to allow for good adhesion between different layers. CA/13X monoliths were successfully prepared with excellent fidelity compared to the designed structures ( Figure 1 b). At 50 wt % zeolite 13X loading, the printed monolith had excellent adhesion between different layers of filaments ( Figure 1 c), and zeolite 13X crystals were randomly distributed in the hierarchical porous CA matrix because of spinodal decomposition ( Figure 1 d). 
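As a rough illustration of the solids balance behind these formulations (not a recipe from the Experimental Methods), the short Python sketch below estimates how much zeolite 13X must be dispersed in a given mass of 15 wt % CA dope to reach a target zeolite fraction in the dry monolith, assuming the dry solids consist only of CA and zeolite once the solvents are removed during phase inversion and solvent exchange. The 100 g dope basis is arbitrary.

```python
def zeolite_mass_for_target_loading(dope_mass_g, ca_wt_frac_in_dope, target_zeolite_frac):
    """Mass of zeolite 13X to disperse in a CA dope so that the dry monolith
    (CA + zeolite only; solvents removed) reaches the target zeolite weight fraction."""
    ca_mass = dope_mass_g * ca_wt_frac_in_dope            # CA dissolved in the dope
    # target_zeolite_frac = m_z / (m_z + m_CA)  ->  m_z = m_CA * w / (1 - w)
    return ca_mass * target_zeolite_frac / (1.0 - target_zeolite_frac)

if __name__ == "__main__":
    for w in (0.50, 0.60, 0.70):                          # loadings explored in the text
        m_z = zeolite_mass_for_target_loading(100.0, 0.15, w)
        print(f"{w:.0%} zeolite in dry monolith -> add {m_z:.1f} g zeolite per 100 g of dope")
```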
However, when zeolite 13X loading increased beyond 60 wt %, the obtained monoliths exhibited dense structures with low porosity ( Figure S2a ). A possible reason for this could be solvent adsorption in the zeolite 13X particles during ink preparation, which would result in phase inversion of the ink before deposition on the printing platform. To prevent undesired preprinting phase separation of the polymer ink due to addition of zeolite 13X, dry zeolite 13X was presaturated with mixed solvent vapor that was in equilibrium with the mixed solvents used for dope preparation. The vapor-loaded zeolite 13X was subsequently used for printing a monolith (denoted as CA/13X) containing 60 wt % zeolite 13X. This monolith exhibited significantly improved porosity ( Figure S2b ) compared to monoliths prepared without the presaturation steps. The loading of zeolite 13X can be further increased to 70 wt % without compromising the printing quality of CA/13X monoliths ( Figure S2c ). Interestingly, no significant differences in BET surface areas and pore volumes were observed between two monolith samples prepared with different zeolite 13X samples (presaturated or not) after taking the different zeolite 13X loadings into consideration ( Figure S2d ). These results suggest that the presaturation step mainly affects the macroporosity of the monoliths. Measurement of the CO 2 uptake kinetics of CA/13X monoliths containing 65 wt % zeolite 13X by thermogravimetric analysis (TGA) reveals rapid CO 2 uptake kinetics ( Figure S3a ) and a CO 2 capacity of 2.6 mmol g monolith –1 (4.0 mmol g zeolite –1 at 50 kPa and 30 °C), which are comparable to the powder zeolite 13X. 42 These results suggest that incorporation of zeolite into the monolith structures has negligible adverse effects on the CO 2 uptake properties of zeolite 13X. Interestingly, an uneven pore size distribution within the printed structure was observed. For example, the bottoms of the filaments from the upper printed layers are typically more porous than the upper surfaces of the filaments from the lower layers ( Figure 1 e). Such heterogeneity in pore sizes persists regardless of our efforts to optimize ink compositions. On the other hand, individual filaments extruded by a syringe using the same ink possess evenly distributed pore sizes ( Figure S3b ). To reconcile these two observations, we speculate that the uneven distribution of pore sizes is related to the 3D printing process. Solvents from newly deposited filaments serve as annealing agents and reduce the surface pore sizes of the filaments from the lower layers. Fabrication of MIL-101(Cr) Monoliths via SBAM Monoliths composed of CA and MIL-101(Cr) crystals [denoted as CA/MIL-101(Cr)] were successfully prepared by SBAM using ink formulations, sorbent presaturation, and 3D printing parameters similar to those for preparing CA/13X monoliths. MIL-101(Cr) powder was first synthesized based on a previously reported large-scale production method. 40 Powder X-ray diffraction (PXRD) patterns reveal the phase purity of the activated MIL-101(Cr) product ( Figure S4a ). The particle sizes of MIL-101(Cr) crystals were less than 1 μm ( Figure S4b ), which is beneficial for preparing a well-mixed polymer ink containing MIL-101(Cr) for 3D printing. The N 2 sorption isotherm at −195.8 °C reveals a BET surface area of 3057 m 2 g –1 and a pore volume of 1.58 cm 3 g –1 of MIL-101(Cr). 
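A quick consistency check of the capacities quoted above (an illustrative calculation, not part of the reported analysis): normalizing the monolith-basis uptake by the zeolite weight fraction recovers the per-zeolite capacity, assuming the CA scaffold itself adsorbs a negligible amount of CO 2 .

```python
def per_sorbent_capacity(uptake_per_g_composite, sorbent_wt_frac):
    """Uptake per gram of active sorbent, assuming the polymer binder is inert toward CO2."""
    return uptake_per_g_composite / sorbent_wt_frac

# 2.6 mmol/g_monolith at 65 wt % zeolite 13X (50 kPa CO2, 30 C)
print(f"{per_sorbent_capacity(2.6, 0.65):.1f} mmol/g_zeolite")   # ~4.0, as quoted above
```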
Because MIL-101(Cr) has a much higher porosity than zeolite 13X (0.34 cm 3 g –1 ), 43 the same weight content of MIL-101(Cr) in the ink will consume larger solvent volume fractions compared to 13X, leading to much greater ink viscosity. Therefore, CA concentrations were adjusted to 12.5 wt % (excluding the mass of sorbent) to achieve a printable viscosity for the ink containing MIL-101(Cr). Furthermore, to minimize potential negative effects of solvent annealing on CO 2 uptake kinetics, the width of the channel walls and monolith walls was set to the width of a single filament, so that each deposited filament will be minimally affected by annealing solvent vapor from peripheral filaments. As shown in Figure 2 a, the shapes of CA/MIL-101(Cr) monolith channels were well-defined, and the channel number per square inch (CPSI) can be as high as 644 in. –2 . TGA suggests the MIL-101(Cr) loading in the monolith was 62 wt % ( Figure S4c ). No dense skin layers were observed on the monolith surface ( Figure 2 b), and MIL-101(Cr) crystals are evenly distributed in the macroporous CA networks without aggregation ( Figure 2 c). PXRD patterns of CA/MIL-101(Cr) exhibit characteristic diffraction reflections for MIL-101(Cr), suggesting that MIL-101(Cr) remains crystalline after SBAM and subsequent solvent exchange processes ( Figure 2 d). The BET surface area of the monolith was calculated to be 1569 m 2 g –1 based on its N 2 adsorption isotherm at −195.8 °C ( Figure 2 e), which corresponds well with the surface area of MIL-101(Cr) and 62% loading. Pore size distribution analysis shows that the porosity of MIL-101(Cr) is well preserved ( Figure S5a ), which, along with monolith macropores, is beneficial for the incorporation of amines and fast diffusion of CO 2 . Fabrication and CO 2 Sorption Properties of PEI-Loaded CA/MIL-101(Cr) Monoliths In our prior work of preparing MIL-101(Cr)-supported amine sorbents for DAC, we observed good agreement between the experimental and theoretical pore volume of PEI-loaded MIL-101(Cr) using the density of branched PEI ( M w 800). 13 This finding suggests the effective insertion of PEI-800 inside the pores of MIL-101(Cr) using this procedure. A dual-solvent strategy was employed to maximize the driving force for infusion of poly(ethylenimine) (PEI) into the pores of MIL-101(Cr). 41 , 44 Hexane was selected as the nonpolar solvent, and methanol was used as the polar solvent to dilute PEI and load it into the MIL-101(Cr) powder. After PEI infusion, it is crucial to wash the CA/MIL-101(Cr) monoliths with methanol to remove excess PEI that blocks CO 2 diffusion pathways of CO 2 to the well-dispersed amine sites in the pores of MIL-101(Cr). TGA combustion experiments show that ∼14 mmol of N g MOF –1 PEI loading was achieved in a monolith using a typical dual-solvent recipe (32.7 g of hexane, 2.1 g of 33 wt % PEI solution in methanol), which is equivalent to 38.5 wt % PEI in PEI-loaded MIL-101(Cr) immobilized in the monolith. PXRD patterns of the PEI-loaded CA/MIL-101(Cr) monoliths, denoted as CA/MIL-101/PEI- X , where X indicates a N loading of X mmol per gram of MIL-101(Cr), suggest that MIL-101(Cr) particles in the monoliths remain crystalline. The reduced intensity of the diffraction peaks at 2θ values smaller than 7° is attributed to the scattering of unorganized PEI in the pores of MIL-101(Cr) ( Figure 2 d). 
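The relation between the nitrogen loading (mmol N per gram of MIL-101(Cr)) and the PEI weight fraction of the PEI-loaded MOF quoted above follows from simple bookkeeping, as sketched below. The repeat-unit mass of ~43.07 g per mole of N (treating branched PEI as (C2H5N)n) is an approximation for PEI-800, which is why the estimate lands slightly below the 38.5 wt % obtained from TGA combustion.

```python
M_PER_N = 43.07   # g of PEI per mol of N, approximating branched PEI as (C2H5N)n

def pei_weight_fraction(n_loading_mmol_per_g_mof):
    """Weight fraction of PEI in PEI-loaded MIL-101(Cr) for a given N loading."""
    m_pei = n_loading_mmol_per_g_mof * 1e-3 * M_PER_N   # g PEI per g of bare MOF
    return m_pei / (1.0 + m_pei)

print(f"{pei_weight_fraction(14.0):.1%}")   # ~37.6 %, close to the reported 38.5 wt %
```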
Although CA has the potential for hydrolysis in basic and acidic solutions, attenuated total reflection infrared spectroscopy (ATR-IR) ( Figure S5b ) suggested that the PEI infusing process is a physical process not involving any chemical transformations (e.g., hydrolysis of CA). In addition, SEM shows that the CA framework maintained the same macroporous texture after PEI infusion ( Figure S5c ). These results together suggest that CA/MIL-101(Cr) monoliths have great stability under PEI loading conditions. The CO 2 uptakes of CA/MIL-101/PEI-14.5 at different pressures measured by the volumetric SAP system are shown in Figure 3 a. While 25 °C was selected as a representative temperature for ambient conditions, −20 °C was selected as the extreme cold temperature to magnify the temperature effects on DAC performance of amine-loaded MIL-101(Cr) adsorbents and to compare against our prior study. 12 At 400 ppm, the CO 2 uptakes in the CA/MIL-101/PEI monolith were 1.1 and 0.57 mmol of g monolith –1 at −20 and 25 °C, respectively. Assuming that PEI-loaded MIL-101(Cr) provides all the CO 2 sorption sites and the CA framework only serves as the support, these values correspond to 1.5 and 0.77 mmol g sorbent –1 at −20 and 25 °C, respectively, which correlates well with our previous work where MIL-101(Cr) powders with comparable PEI loading exhibited similar CO 2 uptakes under the same testing conditions ( Figure S6 ). 12 The same PEI loading method was repeated three times to provide consistent CO 2 uptakes at 400 ppm of CO 2 and −20 °C, suggesting good reproducibility of this PEI loading method. As the CO 2 adsorption heats of CA/MIL-101/PEI monoliths may vary at different temperatures, the commonly used method of measuring several CO 2 adsorption isotherms at different temperatures with the SAP system and estimating the adsorption heats based on the Clausius–Clapeyron equation may not apply. Therefore, the CO 2 isosteric heat at 25 °C was directly measured as −80 kJ mol –1 by integrating the measured heat flow during the CO 2 sorption experiment performed on TGA/DSC ( Figure S7 ). This value is between the low adsorption heat of MIL-101(Cr) with low amine content (e.g., 30 wt % TEPA) and the high adsorption heat of MIL-101(Cr) with high amine content (e.g., 50 wt %) found in prior work. 12 To study how the PEI loading affects the CO 2 uptake performance, several CA/MIL-101/PEI monoliths with varying amine loadings were prepared by varying the amount of PEI solution used in the PEI infusion step. In general, the amount of incorporated PEI in the monoliths is positively correlated to the amount of PEI used during the PEI infusion step. As shown in Figure 3 b, the CO 2 uptake capacity increased with the amine loading but plateaued when the amine loading was more than 20 mmol N g MOF –1 . The plateau in the CO 2 uptake might be due to pore blockage at high PEI loadings in MIL-101(Cr). Most monolith samples exhibit amine efficiencies (defined as the moles of sorbed CO 2 uptake normalized by the moles of amine sites) between 0.15 and 0.20, which are comparable to the amine efficiencies of PEI or TEPA impregnated MIL-101(Cr) powder sorbents. 12 Some monoliths with relatively low amine loadings (<12 mmol N g MOF –1 ) show amine efficiencies that are smaller than 0.12. 
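The monolith-to-sorbent normalization and the amine efficiency reported above both follow from mass bookkeeping on the composite. The sketch below reproduces that arithmetic for CA/MIL-101/PEI-14.5 under the stated assumptions (62 wt % MIL-101(Cr) in the bare monolith, 14.5 mmol N per g MOF, ~43.07 g PEI per mol N); it is an illustration of the normalization, not the authors' workflow, and small rounding differences relative to the quoted 1.5 and 0.77 mmol g sorbent −1 are expected.

```python
M_PER_N = 43.07        # g PEI per mol N (branched PEI approximated as (C2H5N)n)
w_mof   = 0.62         # MOF weight fraction of the bare CA/MIL-101(Cr) monolith
n_load  = 14.5e-3      # mol N per g MOF for CA/MIL-101/PEI-14.5

m_pei     = w_mof * n_load * M_PER_N               # g PEI added per g of bare monolith
w_sorbent = (w_mof + m_pei) / (1.0 + m_pei)        # (MOF + PEI) fraction of the loaded monolith
n_amine   = w_mof * n_load * 1e3 / (1.0 + m_pei)   # mmol N per g of loaded monolith

for q in (1.1, 0.57):                              # mmol CO2 / g_monolith at -20 and 25 C
    print(f"{q} mmol/g_monolith -> {q / w_sorbent:.2f} mmol/g_sorbent, "
          f"amine efficiency ~ {q / n_amine:.2f}")
```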
As these monoliths were treated with small amounts of PEI solution during the PEI infusion process, the small PEI concentration gradients during this step may result in slow PEI diffusion and nonuniform PEI distribution in the monoliths, which eventually lead to poor amine efficiencies. Considering the good reproducibility of the PEI loading experiments, the dual-solvent amine loading method specified in the Experimental Methods section was subsequently used to modify large CA/MIL-101(Cr) monoliths for breakthrough experiments. Although it is convenient to measure CO 2 uptakes at different CO 2 partial pressures and temperatures with the SAP system, it does not provide information on CO 2 uptake kinetics. Therefore, dynamic uptake profiles ( Figure 4 ) using 400 ppm of CO 2 at −20 °C were collected by the TGA/DSC setup, which revealed comparable CO 2 uptake rates for two CA/MIL-101(Cr) monoliths with high (20.2 mmol N g MOF –1 ) and low (13.4 mmol N g MOF –1 ) PEI loading. Both monoliths reached pseudoequilibrium ( M / M ∞ = 0.95) in about 2 h, which is similar to the CO 2 uptake kinetics of PEI-impregnated MIL-101(Cr) in powder form. 12 This suggests that the CA frameworks have negligible effects on CO 2 diffusion under the subambient conditions used here, despite the presence of narrow pore sizes at the interfaces between the different layers of filaments. It is worth noting that a higher productivity can be achieved at the process scale with optimized durations of adsorption/desorption steps and higher flow rates. 45 However, more detailed experiments are needed to further support this supposition. Temperature-programmed desorption (TPD) experiments reveal that interactions of CO 2 with CA/MIL-101/PEI monoliths are dependent on the PEI loading in the monoliths. For CA/MIL-101/PEI-20.2, a bimodal CO 2 desorption profile with a peak desorption temperature of 52.3 °C was observed (inset of Figure 4 a). In comparison, a unimodal desorption profile with the peak desorption temperature at 26.4 °C was observed for CA/MIL-101/PEI-13.4 (inset of Figure 4 b). These results suggest that increased PEI loading provides more strong chemisorption sites for CO 2 interactions, highlighting the importance of optimizing the PEI loading in these monoliths for a balance of high CO 2 uptake and facile regeneration. Similar interaction mechanisms dependent on PEI loading have been reported in prior works on PEI-based adsorbents in powder form. 12 , 46 − 48 In addition, the CO 2 kinetics of CA/MIL-101/PEI pellets of different sizes, pellet-L with large size (2 × 4 × 3 mm 3 ) and pellet-S with small size (1 × 0.2 × 5 mm 3 ) ( Figure S8a ), were compared with the monoliths prepared by 3D printing. The detailed procedures to prepare these pellets are available in the Supporting Information . As shown in Figure S8b , the monolith and pellet-S reach a normalized CO 2 uptake capacity of 0.9 in 160 min, while pellet-L could reach a normalized CO 2 uptake capacity of only 0.66 in the same amount of time. The much faster CO 2 sorption kinetics of pellet-S and CA/MIL-101/PEI monoliths suggests the importance of controlling the CO 2 diffusion lengths in the composites of CA and MIL-101(Cr). Notably, the comparable CO 2 uptake kinetics in pellet-S and monoliths suggest that the nonuniform pore size distribution (due to repetitive filament deposition and solvent annealing during 3D printing, Figure S8c ) does not compromise the CO 2 diffusion rate or sorption uptake kinetics in the CA/MIL-101/PEI monoliths. 
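The diffusion-length argument behind the pellet comparison can be made semi-quantitative with a simple scaling estimate: for Fickian transport, the equilibration time scales roughly as (L/2)^2/D, where L is the smallest dimension of the shaped body. The sketch below applies this to the two pellets; the unknown diffusivity cancels in the ratio, and the estimate ignores the hierarchical porosity of the composite, so it is only an order-of-magnitude guide.

```python
# Smallest dimensions (mm) of the two pellets compared in Figure S8
L_large, L_small = 2.0, 0.2

# Fickian scaling: t ~ (L/2)^2 / D, so the (unknown) diffusivity D cancels in the ratio
ratio = (L_large / 2.0) ** 2 / (L_small / 2.0) ** 2
print(f"pellet-L is expected to equilibrate ~{ratio:.0f}x more slowly than pellet-S")  # ~100x
```

With pellet-S and the printed filaments both sufficiently thin, macroscopic diffusion lengths are not expected to be rate-limiting, consistent with the interpretation developed next.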
This is likely because the average size of the population of these pores might still be too large to change the dominant mass transfer resistance in the monolith, which is the CO 2 diffusion in the MIL-101(Cr)-supported PEI. Due to the moderate CO 2 adsorption heat and CO 2 affinity of CA/MIL-101/PEI monoliths, less energy is required to desorb equivalent amounts of CO 2 compared to the case of high heats of adsorption that are found in many DAC sorbents. 12 A cyclic adsorption–desorption experiment was designed in the TGA/DSC for CA/MIL-101/PEI-14.5 to study its recyclability, with a 2 h CO 2 adsorption step at −20 °C and a 2 h desorption step at 60 °C. The average working capacity was about 0.95 mmol g monolith –1 over 14 cycles ( Figure 5 ). The decrease of about 0.15 mmol/g in the third cycle is attributed to an instrumental measurement error as the adsorption and desorption runs were performed continuously and automatically by the TGA/DSC setup. Although the consistent CO 2 working capacity suggests decent stability of CA/MIL-101/PEI-14.5 over this time frame and a possibility of sorbent regeneration at 60 °C, more detailed process studies will be required to verify the benefit of the low CO 2 heat of adsorption for DAC at low temperatures. Mechanical Strength and DAC Performance of CA/MIL-101/PEI Monoliths After SBAM methods for printing CA/MIL-101(Cr) monoliths were developed, 1.5 cm × 1.5 cm pieces of CA/MIL-101/PEI monoliths were fabricated ( Figure S9a ) for breakthrough experiments. There are two conceptual ways to prepare such monoliths with large dimensions, namely, a bottom-up method and a “slice and stack” method ( Scheme S3 ). The bottom-up method is relatively straightforward in terms of the printing process. However, the monoliths prepared by this method will deform as their height increases because the recently phase-separated monolith foundation does not have the mechanical strength to support the weight of the growing monolith. For the “slice and stack” method, there are two ways to slice monoliths into small parts. Horizontal slicing has been adopted in the literature; however, as a large number of monolith pieces are required to be stacked into a tall monolith, subsequent alignment of the monolith channels is challenging. In comparison, vertical slicing results in fewer monolith pieces, which is beneficial for obtaining straight channels with little resistance for gas flows. Additionally, it is more efficient to fabricate large monolith pieces for vertical slicing methods than to print many small pieces for horizontal slicing methods from the perspective of large-scale manufacturing via 3D printing. Therefore, the "vertically sliced" method was adopted to prepare CA/MIL-101/PEI monoliths, denoted as monolith-L, for mechanical testing and dynamic column experiments. The results of compression tests using monolith-L are shown in Figure S9b . Uniaxial force was applied in the z direction of the monoliths. Interestingly, the stress gradually increased when the strain was less than 0.15 but increased rapidly for the higher strain region. Consistent results were observed for two monolith samples. Similar mechanical responses for the monoliths have also been reported for 3D-printed zeolite monoliths. 27 Further inspections of the monolith sample after the mechanical test showed that deformation and delamination mainly occurred in “ridges” of the monolith samples (the channels of the bed when monolith-L are packed together; Figure S9c ). 
In comparison, the “base” of the monolith only underwent slight compression deformation. This is because the ridges have much smaller cross-sectional areas (less than 60% of the base) and hence much greater stress compared to that applied to the base. The base of the monoliths did not break at the maximum loading of the testing device (∼2K N), suggesting decent compressive strength for the monolith-L prepared by SBAM. We envision that further optimization of 3D printing parameters and monolithic structures can improve the adhesion of different 3D-printed layers and enhance the overall mechanical stability. Monolith-L was packed in a homemade stainless-steel housing ( Figure S10a ) for dynamic breakthrough experiments using 400 ppm of CO 2 under dry conditions. As shown in Figure 6 a, when the flow rate of the feed gas was 200 sccm at −20 °C, CO 2 broke through the column almost instantly, possibly due to a CO 2 bypass through the straight channels of the monoliths. The pseudoequilibrium CO 2 uptake capacity of these monoliths was 1.05 mmol g monolith –1 , which is consistent with the results of the gravimetric and volumetric CO 2 uptake measurements. The volumetric CO 2 uptake of the bed made of monolith-L is calculated to be 0.244 mmol cm –3 by using the total bed volume, including the channels, to calculate the apparent monolith density. This value can be further enhanced by increasing the density of 3D-printed filaments in monoliths and reducing the channel size. When the flow rate was reduced to 100 and 50 sccm, the pseudoequilibrium CO 2 uptake capacities were almost unchanged, as shown in Figure 6 b, but the CO 2 breakthrough uptake capacities of the bed (i.e., the CO 2 uptake capacity when the normalized CO 2 concentration reached 0.05) increased to 0.53 and 0.60 mmol g monolith –1 for gas flow rates of 100 and 50 sccm, respectively. This is likely because the longer residence time in the bed allows more CO 2 to be captured before breaking through the bed. Following the dry 400 ppm of CO 2 breakthrough experiments, a TPD experiment was performed while purging the monoliths with a constant 100 sccm N 2 flow. A desorption peak temperature of 40 °C was observed ( Figure S10b ), which is slightly higher than the desorption peak temperatures for testing CA/MIL-101/PEI monoliths with comparable PEI loading ( Figure 4 b). This difference is possibly due to the external heating of the packed bed in breakthrough experiments which cannot achieve fast and uniform heating of the monoliths as in the case of TPD experiments using much smaller amount of samples performed in the TGA/DSC system. Breakthrough experiments using 400 ppm CO 2 were also repeated at higher temperatures. As shown in Figure 6 c, the CO 2 uptake capacities of the monoliths decrease as temperature increases, although the monolith was found to still adsorb 0.60 mmol g monolith –1 CO 2 at 25 °C under dry conditions, comparable to other shaped DAC sorbents. 18 These results highlight the potential versatility of CA/MIL-101/PEI monoliths for DAC under different climate environments. The effect of humidity on direct air CO 2 capture at −20 °C was also explored by presaturating monolith-L with 80% RH moisture in N 2 followed by the introduction of wet (80% RH) 400 ppm of CO 2 in N 2 . 
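The uptake capacities extracted from the breakthrough experiments correspond to integrating the area above the breakthrough curve for a dilute feed. The sketch below shows this standard calculation; the synthetic curve, bed mass, and the 22,414 cm 3 mol −1 standard molar volume (the convention assumed here for "standard" cm 3 ) are illustrative stand-ins rather than the data of Figure 6.

```python
import numpy as np

def _trapz(y, x):
    """Simple trapezoidal integration (kept explicit for NumPy-version independence)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def breakthrough_uptakes(t_min, c_over_c0, flow_sccm, y_co2, bed_mass_g,
                         threshold=0.05, vm_std=22_414.0):
    """Total and breakthrough CO2 uptakes (mmol per g of bed) from a dilute-feed curve."""
    n_in = flow_sccm * y_co2 / vm_std * 1e3          # mmol CO2 fed per minute
    captured = 1.0 - np.clip(c_over_c0, 0.0, 1.0)    # fraction of feed CO2 retained in the bed
    q_total = n_in * _trapz(captured, t_min) / bed_mass_g
    mask = c_over_c0 <= threshold                    # up to 5 % breakthrough
    q_bt = n_in * _trapz(captured[mask], t_min[mask]) / bed_mass_g
    return q_total, q_bt

# Illustrative synthetic curve only (not digitized from Figure 6)
t = np.linspace(0.0, 600.0, 601)                     # min
c = 1.0 - np.exp(-t / 150.0)                         # toy breakthrough profile
print(breakthrough_uptakes(t, c, flow_sccm=50.0, y_co2=400e-6, bed_mass_g=1.0))
```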
As the water partial pressure used in the breakthrough experiments (0.8 mbar) was lower than the sublimation vapor pressure of water (0.99 mbar) at −20 °C, 49 water vapor should not condense in the column, but it might condense/freeze in the pores of the MIL-101(Cr) and perhaps in the pores of the polymer support and affect the kinetics of CO 2 adsorption. As shown in Figure S11 , moisture broke through the column instantly with a gradually increasing moisture signal at the column exit throughout the presaturation experiment, which suggests sluggish moisture uptake kinetics of the monoliths at −20 °C. The water uptake calculated from the breakthrough experiment is about 21.0 mmol g monolith –1 , which is slightly higher than the water uptake measured gravimetrically at 78% RH at 25 °C ( Figure S12 ). Interestingly, an instantaneous CO 2 breakthrough from the column was also observed in the wet CO 2 breakthrough experiment. In contrast to the CO 2 breakthrough curves under dry conditions, the significantly broader breakthrough curve of wet CO 2 suggests a noticeable decline of CO 2 uptake kinetics ( Figure 7 a), which is possibly due to the additional mass transfer resistance originating from preadsorbed water molecules around amine sites or perhaps in the mesopores/macropores of the CA polymer matrix. Despite the decreased CO 2 uptake kinetics, the CO 2 uptake capacity increased by about 36% from 1.05 mmol g monolith –1 under dry conditions to 1.43 mmol g monolith –1 in the wet monoliths. This enhanced CO 2 uptake under humid conditions is attributed to the improved chain mobility of PEI molecules due to the plasticizing effects of water 12 and/or enabling CO 2 sorption as bicarbonate. 14 , 50 , 51 While some DAC processes will likely operate with constantly hydrated sorbents (e.g., DAC in humid regions using direct contact steam-stripping desorption), in some DAC processes, humid air will be fed into a dry or partially dry DAC bed. As it takes much longer for the bed to saturate with water due to the slow water sorption kinetics, the water uptake in the bed at the end of the CO 2 adsorption step should be much lower than the maximum uptake capacity in these latter types of DAC processes. To mimic this specific scenario, wet 400 ppm of CO 2 with 80% RH moisture was directly introduced to the dry bed of CA/MIL-101/PEI monoliths without presaturation of the bed. Interestingly, the mean residence time of CO 2 became longer ( Figure 7 b), and the pseudoequilibrium of the CO 2 uptake of monolith-L was increased by 22% compared to the dry condition. Meanwhile, the coadsorbed water capacity was 7.0 mmol g monolith –1 , which is about 1/3 of the water uptake capacity (21.0 mmol g monolith –1 ) determined by the water presaturation breakthrough curve ( Figure S11 ). This observation suggests that optimizing the duration of the adsorption step could potentially benefit the overall CO 2 working capacity of the process without paying the substantial energy penalty to remove excessive coadsorbed water molecules.
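The humidity bookkeeping in this paragraph is straightforward to verify, as sketched below: the feed water partial pressure follows from the relative humidity and the sublimation pressure quoted above, and the coadsorbed water can be compared with the CO 2 captured to gauge the potential drying penalty. The numbers are those quoted in the text; the molar-mass conversion is standard.

```python
P_SAT_ICE = 0.99     # mbar, sublimation vapor pressure of ice at -20 C (value quoted above)

print(f"80% RH at -20 C -> p_H2O ~ {0.80 * P_SAT_ICE:.2f} mbar (below 0.99 mbar)")

# Humid feed on an initially dry bed: coadsorbed water vs CO2 captured
water_mmol_g = 7.0                  # coadsorbed water, mmol/g_monolith
co2_mmol_g   = 1.05 * 1.22          # dry pseudoequilibrium uptake increased by ~22 %
print(f"~{water_mmol_g * 18.02 / 1000:.2f} g H2O per g monolith, "
      f"i.e. ~{water_mmol_g / co2_mmol_g:.1f} mol H2O per mol CO2 captured")
```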
Conclusions In summary, this work demonstrates the utilization of SBAM to fabricate polymer/sorbent composite monoliths with hierarchical porosity and high sorbent content (up to 70 wt %). This work suggests that SBAM can be a useful tool to fabricate contactors of sorbent particles with various pore volumes and chemical compositions, such that the resulting contactors could be potentially used for different chemical separation problems. As a demonstration, CA monoliths containing zeolite 13X or MIL-101(Cr) were successfully fabricated and characterized. PEI was successfully loaded into the CA/MIL-101(Cr) monoliths to fabricate DAC contactors that were then evaluated under both ambient and cold temperature operating conditions. Integration of PEI-loaded MIL-101(Cr) sorbents into the macroporous CA network does not compromise their DAC properties when compared to the powder form, and an average CO 2 working capacity of 0.94 mmol g monolith –1 was observed when CO 2 was adsorbed at −20 °C and desorbed at 60 °C. Under dry conditions, dynamic breakthrough experiments at −20 °C showed a pseudoequilibrium CO 2 uptake capacity of 1.05 mmol g monolith –1 , which is consistent with single-component CO 2 sorption results. Presaturating the bed with 70% RH moisture boosted the pseudoequilibrium CO 2 uptake to 1.43 mmol g monolith –1 at −20 °C, which is attributed to the plasticizing effects of moisture or the formation of bicarbonate species. Combined with 400 ppm of CO 2 breakthrough experiments performed at temperatures below 15 °C, this study suggests the potential applicability of CA/MIL-101/monoliths in DAC under subambient conditions. Key limitations of this work include the moderate MIL-101(Cr) loading (< 70 wt %) in the monoliths and the relatively simple monolith structures achieved so far. It would be desirable to harness the printing versatility of SBAM to fabricate monoliths with more complex geometries and compare their performance. Preliminary results show that SBAM could be employed to fabricate monoliths with gyroid channels (monolith-G, Figure S13a ) using gyroid as the infill pattern and 25% as the infill density in Cura. In comparison to monolith-L, the bed made of monolith-G showed breakthrough curves of 400 ppm of CO 2 under dry conditions at −20 °C with similar pseudoequilibrium CO 2 uptake capacities and mass transfer performance ( Figure S13b ), suggesting that SBAM has versatility in preparing monoliths with complex structures without compromising the CO 2 uptake capacities and kinetics. It should be noted that the linear gas velocities used in these preliminary comparison breakthrough experiments are lower than 1.5 cm s –1 , which are far from the practical air velocities desirable for DAC processes. Higher gas flow rates in the breakthrough experiments and other forms of gyroid channels might be required to distinguish the differences between the mass/heat transfer dynamics of monoliths with gyroid structures and simple straight-channel structures. In addition, although SBAM and other 3D printing techniques exhibit prospects of affording adsorbent monoliths with complex, unconventional structures, it is challenging to manufacture these monoliths at the scales and speeds needed for DAC. Further study in the mechanical engineering (e.g., 3D printer customizations) and materials engineering (e.g., ink formulation engineering) are suggested for making 3D printing viable for practical manufacturing of DAC contactors. 
Finally, although a milder regeneration temperature can be used for low-temperature DAC sorbents based on the consistent CO 2 working capacity obtained in the cyclic experiments, detailed process studies are required to identify optimized desorption conditions for low-temperature DAC sorbents and compare the energy consumption of DAC processes deployed under different climate conditions.
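As a crude illustration of the kind of comparison such process studies would involve, the sketch below estimates the specific regeneration heat of a temperature-swing cycle as sensible heat plus heat of adsorption, using the cycle capacity and adsorption heat reported above. The sorbent heat capacity (≈1.5 J g −1 K −1 ) is an assumed, typical value, and heat recovery, water coadsorption, and heat losses are ignored, so the result is indicative only.

```python
def regeneration_heat_kj_per_mol(working_capacity_mmol_g, dH_ads_kj_mol, dT_k, cp_j_g_k=1.5):
    """Rough specific regeneration heat (kJ per mol CO2) for a temperature-swing cycle."""
    sensible = cp_j_g_k * dT_k / working_capacity_mmol_g   # J per mmol = kJ per mol
    return sensible + dH_ads_kj_mol

q = regeneration_heat_kj_per_mol(0.95, 80.0, 80.0)   # -20 C adsorption, 60 C desorption
print(f"~{q:.0f} kJ/mol CO2  (~{q / 44.01:.1f} GJ per tonne of CO2)")
```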
Zeolites, silica-supported amines, and metal–organic frameworks (MOFs) have been demonstrated as promising adsorbents for direct air CO 2 capture (DAC), but the shaping and structuring of these materials into sorbent modules for practical processes have been inadequately investigated compared to the extensive research on powder materials. Furthermore, there have been relatively few studies reporting the DAC performance of sorbent contactors under cold, subambient conditions (temperatures below 20 °C). In this work, we demonstrate the successful fabrication of adsorbent monoliths composed of cellulose acetate (CA) and adsorbent particles such as zeolite 13X and MOF MIL-101(Cr) by a 3D printing technique: solution-based additive manufacturing (SBAM). These monoliths feature interpenetrated macroporous polymeric frameworks in which microcrystals of zeolite 13X or MIL-101(Cr) are evenly distributed, highlighting the versatility of SBAM in fabricating monoliths containing sorbents with different particle sizes and density. Branched poly(ethylenimine) (PEI) is successfully loaded into the CA/MIL-101(Cr) monoliths to impart CO 2 uptakes of 1.05 mmol g monolith –1 at −20 °C and 400 ppm of CO 2 . Kinetic analysis shows that the CO 2 sorption kinetics of PEI-loaded MIL-101(Cr) sorbents are not compromised in the monoliths compared to the powder sorbents. Importantly, these monoliths exhibit promising working capacities (0.95 mmol g monolith –1 ) over 14 temperature swing cycles with a moderate regeneration temperature of 60 °C. Dynamic breakthrough experiments at 25 °C under dry conditions reveal a CO 2 uptake capacity of 0.60 mmol g monolith –1 , which further increases to 1.05 and 1.43 mmol g monolith –1 at −20 °C under dry and humid (70% relative humidity) conditions, respectively. Our work showcases the successful implementation of SBAM in making DAC sorbent monoliths with notable CO 2 capture performance over a wide range of sorption temperatures, suggesting that SBAM can enable the preparation of efficient sorbent contactors in various form factors for other important chemical separations.
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsami.3c13528 . Schematic illustrations for experimental setups, phase diagram of SBAM ink, characterization data (SEM, TGA, XRD, ATR-IR, pore size distribution), and CO 2 adsorption behavior (TGA/DSC data, adsorption isotherms, CO 2 -TPD data, and breakthrough experiments) of the CA/MIL-101/PEI monoliths ( PDF ) The authors declare the following competing financial interest(s): C.W.J. has a financial interest in several companies that seek to commercialize CO2 capture from air. This work is not affiliated with such companies. C.W.J. has a conflict-of-interest management plan in place at Georgia Tech. Acknowledgments This research was supported by the National Energy Technology Laboratory of the U.S. Department of Energy under Award DE-FE-FE0031952 and Zero Carbon Partners, LLC. This work was performed in part at the Georgia Tech Institute for Electronics and Nanotechnology, a member of the National Nanotechnology Coordinated Infrastructure (NNCI), which is supported by the National Science Foundation (ECCS-2025462).
CC BY
no
2024-01-16 23:43:51
ACS Appl Mater Interfaces. 2023 Dec 18; 16(1):1404-1415
oa_package/f3/a6/PMC10788822.tar.gz
PMC10788824
38157306
Introduction The ability to achieve quantitative detection of specific molecules, chemical markers, or nanoscale assemblies in bodily fluids is at the heart of medical laboratory diagnostics. 1 Examples range from early detection of toxins, 2 microbiological agents, 3 or tumoral biomarkers, 4 to the routine monitoring of patients with chronic diseases such as diabetes, 5 thrombophilia, 6 or myelodysplastic syndromes. 7 Typical diagnostic bioassays comprise enzyme immunohistochemistry, 8 liquid chromatography, 6 protein and genetic electrophoresis, 9 and blood cultures. 10 While these methods offer good levels of sensitivity and specificity, they suffer from several drawbacks. First, they tend to be time-consuming and require purification and preparation steps, which can potentially alter the antigen under investigation and affect the detection itself. 9 Second, they are often expensive due to their complexity and the need for reagents or advanced experimental setup. Finally, they tend to require relatively large volumes of bodily fluids. Part of the problem comes from the complexity of bodily fluids, which contain a wide size and compositional range of molecules, proteins, biopolymers, and cells, often at concentrations significantly larger than that of the desired detection target. Additionally, most bodily fluids are highly non-Newtonian, 11 and their inherent compositional heterogeneity renders physical detection methods such as mechanical resonators 12 or nanofluidics-based approaches 13 challenging. As a result, detection methods operating directly into raw bodily fluids are sparse, especially when aiming to detect and quantify a specific target within that fluid. Being able to operate with small quantities of unprocessed bodily fluids could prove a game changer for detection and medical prognosis, potentially cutting costs and diagnosis time as well as offering measurements better reflecting the natural environment of the desired target. Here, we show that quantitative measurements can be achieved in single drops of saliva by combining immunorecognition with mechanical detection optimized to operate on the correct scale. Saliva is arguably one of the most challenging bodily fluids to operate in given the presence of large biopolymers forming gel–liquid structures, but this can be overcome by accordingly adapting the mechanical sensing. The choice of saliva is also motivated by its potential for noninvasive and real-time diagnostics of infective and neoplastic diseases. 14 , 15 To illustrate the capabilities of our method, we target the model and native extracellular nanovesicles (EVs). EVs are small (30–300 nm) phospholipid-based vesicles present in most bodily fluids including blood, saliva, and urine. 16 , 17 They are secreted by cells into the surrounding connective matrix and are naturally used as vehicles to cargo small molecules, proteins, and nucleic acids between distant cells and throughout the body. 16 EVs play a key role as autocrine and paracrine signals 16 regulating multiple cellular functions from growth and apoptosis 18 to gene expression and antigen presentation. 19 They have been suggested as biomarkers for early detection and monitoring of various diseases including cancer, 20 , 21 diabetes and metabolic conditions, 5 , 22 neurodegenerative pathology, and viral or microbiological infections. 23 Several pathologies promote the release of specific EVs with a unique combination of antigens in terms of both type and concentration. 
17 However, routine use of EVs in diagnostics is currently still limited by the costs and slowness previously highlighted. Additionally, existing characterization methods tend to focus on genetics and proteomics and require relatively expensive purification and concentration of milliliters of bodily fluids 24 , 25 with no accepted standards. Using vibrating microcantilevers suitably functionalized, we are able to bypass these issues and quantify specific EV populations directly into saliva. Vibrating microcantilevers have long been used as biosensors, 26 , 27 including proposed approaches for cancer detection, 12 but operating in liquids tends to limit the sensitivity of the technique, 28 and most applications rely on sensing in vacuum, air, or purified solutions. 26 More recent developments in the field of nanomechanical systems have focused on optomechanical resonators 29 or sophisticated bespoke systems 30 to achieve high levels of sensitivity. However, operating directly in complex biological fluids remains a significant challenge, limiting applicability, use, and direct diagnostics. Here, we show that saliva exhibits scale-dependent viscoelasticity, a property that we exploit to operate in raw saliva with a sensitivity comparable to that achieved in pure water. The proposed approach can be upscaled, parallelized, and in principle applied for the detection of a wide range of targets directly in complex fluids.
Testing the Method with Model EVs Based on the microrheology results, we use microcantilevers coated with an antifouling zwitterionic layer and oscillating with an amplitude smaller than S = 25 nm. In practice, the smaller the oscillation amplitude, the better, providing a sufficient signal-to-noise ratio. This is typically achieved using microcantilevers as small and stiff as possible, thereby ensuring a comparatively high resonance frequency and quality factor (see Supporting Information section 4 ) and hence sensitivity. Here, we use Olympus AC55 cantilevers (see Experimental Section ) which offer some of the highest resonance frequencies and quality factors among commercially available levers. To validate the proposed approach, we conducted a set of experiments aiming to quantify the amount of synthetic model EVs dissolved into raw saliva. The interest of using model EVs is twofold: first, since the saliva sample is prepared with a known concentration of model EVs, it allows for independent determination of the setup sensitivity. The concentration of a specific native EV’s subpopulations in saliva varies between individuals 47 and experiments, 47 , 48 making any independent measurements highly challenging. Here, the model EVs occur precisely as one of these subpopulations but with a unique protein marker and a specific concentration. Second, a comparison of the known EV concentrations with the measured quantities allows for calibration of the setup. To best mimic natural EVs in size and composition, we create 100 nm gel-phase phospholipid (DPPC) vesicles, with 0.5% of the lipids exposing a tether biotin acting as a specific EV marker ( Figure 3 a). The model EVs, dissolved in a standard phosphate buffer saline (PBS) solution, are then mixed with the raw saliva to achieve the desired final concentration, but always ensuring that the EV solution represents only 5% of the total saliva volume to minimally affect saliva’s properties ( Figure 3 b). The microcantilevers are functionalized with the same gel-phase phospholipid bilayer containing 0.5% of biotinylated lipid headgroups. In this configuration, 99.5% of the zwitterionic headgroups act as a relatively robust antifouling layer with the membrane in gel phase; the biotinylated headgroups can specifically bind the model EVs after further functionalization of the microcantilever with streptavidin ( Figure 3 c,d). An AFM is used to track any changes in the cantilever resonance over time, allowing for quantification of the mass uptake associated with specific EV binding to the cantilever (see Supporting Information sections 4–5 and Figures S4–S5 for more details). The results show a clear sensitivity to the model EVs binding to the cantilever ( Figure 3 e), with meaningful measurements achieved at concentrations down to 0.3 μg/mL in a single drop (100 μL) of saliva. The sensitivity threshold appears to be around 0.3 μg/mL, where the readout becomes close to the control. While the natural concentration of native EVs in saliva is not known, various studies estimate a range 1 or 2 orders of magnitude greater than the present sensitivity achieved. 49 , 50 Interestingly, a rapid uptake is visible over the first 5–10 min, followed by a slower uptake also present in the control experiment (pure saliva). This suggests that a quantitative readout is possible in less than 30 min despite the small sample volume and the absence of any sample preparation or conditioning of the sample. 
This compares favorably to the standard EV characterization methods based on affinity columns, 51 provided no quantification of the EVs’ encapsulated cargo is needed. A consistent analysis of the results was achieved by globally fitting all the experimental results with a double exponential and imposing the same two time scales for all the experiments ( Figure 3 f). The evolution of the mass uptake with time, m ( t ), is fitted with a double exponential of the form m ( t ) = M − m 1 exp(− t /τ 1 ) − m 2 exp(− t /τ 2 ), where M is the maximum mass uptake in each experiment, and m 1 and m 2 are the concentration-dependent fitting coefficients associated with the global time scales τ 1 and τ 2 . The use of a double exponential model to describe an adsorption process evolving over two distinct time scales is usually referred to as the Largitte double step kinetics model. 52 The initial rapid uptake is not visible for the control and the lowest EV concentration (0.1 μg/mL), and the associated coefficients m 1 are hence set to zero. When plotting M against the EV concentration C present in saliva at the start of the experiment ( Figure 3 f), a two-regime behavior emerges. Near the detection threshold, M increases rapidly with C , likely limited only by diffusion of the target EVs to the surface of the sensing cantilever. As more and more EVs get tethered to the surface, the binding rate decreases because diffusing EVs need to find an uncovered region of the cantilever to bind. In this interpretation, this second regime dominates at larger C , where it reduces the dependence of M on C , as visible in Figure 3 e. Significantly, since the evolution of M with C is determined by the ability of EVs to bind to the microcantilever, Figure 3 f effectively acts as a calibration curve for microcantilevers with the specific surface geometry used here. Additionally, since the technique requires only a drop of fluid, it can easily be multiplexed to simultaneously quantify multiple EV targets and improve accuracy.
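For readers who wish to reproduce the two quantitative steps described above, the following Python sketches illustrate (i) how a cantilever resonance-frequency shift can be converted into an added mass using the simplest harmonic-oscillator approximation, and (ii) how a global double-exponential fit with shared time scales can be set up. Both are minimal illustrations rather than the analysis code used in this work: the spring constant, frequencies, synthetic data, and starting values are assumed, and the actual EV quantification relies on the calibration of Figure 3 f rather than on the simplified mass-loading model below.

```python
import math

def added_mass_pg(f0_hz, f1_hz, k_n_per_m):
    """Added mass (pg) from a resonance downshift, simple harmonic-oscillator model.

    Assumes an unchanged spring constant and that the bound material loads the full
    effective mass m* = k / (2*pi*f0)^2 (uniform mass loading); a point mass bound
    near the free end would give a different prefactor.
    """
    m_eff = k_n_per_m / (2.0 * math.pi * f0_hz) ** 2      # kg
    return m_eff * ((f0_hz / f1_hz) ** 2 - 1.0) * 1e15    # kg -> pg

# Illustrative numbers only (not the AC55 calibration): 1.2 MHz mode, 85 N/m, 50 Hz shift
print(f"{added_mass_pg(1.20e6, 1.20e6 - 50.0, 85.0):.2f} pg of added mass")
```

The global fit simply shares one (τ1, τ2) pair across all uptake curves while giving each curve its own (m1, m2) amplitudes, mirroring the Largitte-type model above [m(t) = M − m1 exp(−t/τ1) − m2 exp(−t/τ2) with M = m1 + m2]:

```python
import numpy as np
from scipy.optimize import least_squares

def uptake(t, m1, m2, tau1, tau2):
    # Equivalent to M - m1*exp(-t/tau1) - m2*exp(-t/tau2) with M = m1 + m2
    return m1 * (1 - np.exp(-t / tau1)) + m2 * (1 - np.exp(-t / tau2))

def global_residuals(p, t, curves):
    tau1, tau2 = p[0], p[1]                        # time scales shared by all curves
    res = []
    for i, y in enumerate(curves):                 # one (m1, m2) pair per concentration
        m1, m2 = p[2 + 2 * i], p[3 + 2 * i]
        res.append(uptake(t, m1, m2, tau1, tau2) - y)
    return np.concatenate(res)

# Synthetic example data (minutes, arbitrary mass units) standing in for Figure 3e
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 121)
true_amps = [(2.0, 1.0), (1.0, 0.8), (0.4, 0.5)]
curves = [uptake(t, a1, a2, 6.0, 40.0) + rng.normal(0.0, 0.02, t.size) for a1, a2 in true_amps]

p0 = [5.0, 30.0] + [1.0, 1.0] * len(curves)        # initial guesses
fit = least_squares(global_residuals, p0, args=(t, curves), bounds=(0.0, np.inf))
print("shared time scales:", fit.x[:2])            # should recover ~6 and ~40 min
```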
Results Scale-Dependence of Saliva’s Viscoelasticity Saliva, like most bodily fluids, is a complex fluid and exhibits a nonlinear viscoelastic behavior upon applied mechanical strain. 37 Although the specific properties of saliva are person-, time-, and condition-dependent, 38 , 39 typical rheological measurements reveal viscosities 1 order of magnitude larger than for water in identical conditions ( Figure 1 ). This is due to the presence of numerous large biopolymers which underpin saliva’s non-Newtonian behavior, 39 even preventing flow through a 200 nm filter. 40 At the macroscopic scale, the viscoelastic differences between pure water and saliva are obvious at all accessible shear rates ( Figure 1 a) and consistent with previous studies. 39 At the nanoscale, viscomechanical sensing can be performed with vibrating microresonators. 29 Considering the rheological findings, mechanical sensing in saliva could be expected to induce a significant reduction in the frequency and amplitude of any vibrating resonators compared to pure water. Interestingly, this is not necessarily the case, as illustrated here using a vibrating commercial microcantilever immersed into either water or saliva and operated with an AFM ( Figure 1 b). We could not observe any significant variation between the measurements obtained in pure water and saliva for oscillation amplitudes smaller than ∼20 nm: the amplitude and frequency of the microcantilever’s resonances remain broadly unchanged ( Figures 1 b and S1 for more details). This suggests that the cantilever experiences a nearly identical environment between water and saliva. To further quantify this observation, we use the resonance frequencies of the vibrating microcantilever to determine the viscosity of saliva as experienced at the probed vibration frequencies. 41 A constant viscosity value identical to that of pure water within error was found at both 32 and 98 kHz (second and third resonances, inset Figure 1 b, see also Supporting Information section 1 for further details). A careful comparison of all the resonances in water and in saliva ( Figures 1 b and S1 ) indicates that the same conclusion holds, regardless of the frequency probed: no systematic frequency shifts to lower values are observed from water to saliva as would have been expected for a higher viscosity liquid. 41 To rationalize this apparently counterintuitive finding, it is necessary to consider the structure of saliva across scales. Saliva can be understood as a (bio)polymeric mesh, filled primarily with water. 42 Proteins and small biological objects such as EVs are dissolved into the water and fill the mesh structure where they can diffuse freely within gaps. The mesh itself is not a static cross-linked structure, but any structural rearrangement of the polymeric network is significantly slower than that of the water it contains, hence conferring saliva its non-Newtonian macroscopic behavior. Because standard rheological measurements are macroscopic, they are dominated by the viscoelastic behavior of the polymeric mesh under strain, with its pronounced viscous and elastic responses (see Supporting Information Figure S2 ). In contrast, microcantilevers operate with nanoscale oscillation amplitudes comparable in size to the gaps naturally existing in the polymeric mesh. 
If the oscillation amplitude A of the vibrating cantilever is smaller than the average gap size S of the mesh, the cantilever primarily experiences the viscous water diffusing within the mesh, with limited impact of the polymers on the measurements. It is important to keep in mind that, while useful, the idea of saliva as a mesh is a simplification of reality, and no single value of S exists. Instead, S can be understood as an effective size marking the transition from a polymer-dominated (at larger scale) to a water-dominated viscoelastic behavior at smaller scale. This scale-dependent viscoelasticity has previously been reported 43 in complex fluids and is likely common given their hierarchical structure. Here, we exploit this scale-dependence to enhance the detection capability of microcantilevers operating directly into saliva: if the oscillation amplitudes are small enough, the sensing microcantilever effectively operates in a simple aqueous solution, where a significantly better signal-to-noise ratio can be achieved. To achieve enhanced microcantilever detection in saliva, we first set out to objectively identify the value of S . The fact that saliva cannot flow through a ∼200 nm filter while water can easily pass through it 13 , 40 suggests that S < 200 nm. This is consistent with the fact that no significant differences between water and saliva could be observed for immersed microcantilevers vibrating with amplitudes below ∼20 nm ( Figure 1 b), suggesting S to be in the 20–200 nm range. To independently quantify S , we used microrheology 44 with tracers ranging from 70 to 370 nm (see Supporting Information section 3 and Table S1 ). If the tracers are able to diffuse freely within the mesh ( Figure 2 a), their mean square displacement (MSD) is expected to follow standard Brownian diffusion and grow linearly over time. 44 , 45 In contrast, if the tracers’ diffusion is hampered by the mesh ( Figure 2 b,c), a subdiffusive 45 behavior is expected whereby the MSD is proportional to the time at a power α < 1. It is therefore convenient to track the anomalous diffusion exponent α for each tracer in order to distinguish “free” (α = 1) from mesh-hindered (α < 1) diffusion. An example of microrheological measurements is shown in Figure 2 d,e, carried out with silica spherical nanoparticles as tracers. The average diameter of the particles is 73 ± 6 nm (50 nm nominal, see Table S1 ), and the use of silica tracers is motivated by the fact that the surface of the microcantilevers is primarily silica. As immediately obvious from the measurements, when operating in saliva, it is crucial for the tracers to be coated with an antifouling layer so as to prevent nonspecific binding to saliva’s constituents. In principle, nonspecific binding can occur with salivary proteins (e.g., mucin fibers, lactoferrin, IgA), ions (calcium, phosphate, carbonate, and thiocyanate ions), 1 and the biopolymeric mesh itself, resulting in a lower mobility of the tracers ( Figure 2 c). The issue of biofouling is common to most measurements in biologically active environments, 13 often deteriorating the accuracy and precision of the measurements over time. 46 Here, it must be addressed since the microcantilever-based detection strategy implicitly assumes that saliva’s biopolymers do not bind to the cantilever but rather move around it if occasionally disturbed. 
If the polymers attach to the cantilever, the latter becomes part of the mesh and hence primarily measures the mesh’s viscoelastic behavior, something we aim to prevent. To tackle this issue, we coat the tracers with a self-assembled zwitterionic lipid bilayer. The zwitterionic nature of the lipid headgroups significantly reduces unwanted interactions and enhances the tracers’ diffusion, as evidenced by the clear increase in α (black to red, Figure 2 d,e). Coating with a zwitterionic bilayer is a simple and effective antifouling strategy and is systematically used hereafter. Practically, the coating also increases the measured diameter of the tracers by ∼8 nm, consistent with the size of two bilayers ( Supplementary section 3 and Table S1 ). Focusing on the evolution of the anomalous diffusion coefficient for different size tracers ( Figure 2 f), it is immediately clear that the larger the size of the tracer, the smaller the value of α. In other words, larger tracers are more hindered by the polymeric mesh, resulting in an accentuated subdiffusive behavior regardless of time. By plotting the α value as a function of the tracers’ size at set times, it is possible to infer the tracer size S which would satisfy α = 1 ( Figure 2 f inset). We find S = 25 ± 10 nm regardless of the time considered. This provides an objective estimate for S , indicating that smaller objects can, on average, diffuse freely through saliva as if it were a purely Newtonian fluid. Testing the Method with Model EVs Based on the microrheology results, we use microcantilevers coated with an antifouling zwitterionic layer and oscillating with an amplitude smaller than S = 25 nm. In practice, the smaller the oscillation amplitude, the better, providing a sufficient signal-to-noise ratio. This is typically achieved using microcantilevers as small and stiff as possible, thereby ensuring a comparatively high resonance frequency and quality factor (see Supporting Information section 4 ) and hence sensitivity. Here, we use Olympus AC55 cantilevers (see Experimental Section ) which offer some of the highest resonance frequencies and quality factors among commercially available levers. To validate the proposed approach, we conducted a set of experiments aiming to quantify the amount of synthetic model EVs dissolved into raw saliva. The interest of using model EVs is twofold: first, since the saliva sample is prepared with a known concentration of model EVs, it allows for independent determination of the setup sensitivity. The concentration of a specific native EV’s subpopulations in saliva varies between individuals 47 and experiments, 47 , 48 making any independent measurements highly challenging. Here, the model EVs occur precisely as one of these subpopulations but with a unique protein marker and a specific concentration. Second, a comparison of the known EV concentrations with the measured quantities allows for calibration of the setup. To best mimic natural EVs in size and composition, we create 100 nm gel-phase phospholipid (DPPC) vesicles, with 0.5% of the lipids exposing a tether biotin acting as a specific EV marker ( Figure 3 a). The model EVs, dissolved in a standard phosphate buffer saline (PBS) solution, are then mixed with the raw saliva to achieve the desired final concentration, but always ensuring that the EV solution represents only 5% of the total saliva volume to minimally affect saliva’s properties ( Figure 3 b). 
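As a concrete illustration of the mesh-size estimate just described, the sketch below computes the anomalous exponent α from an MSD curve and extrapolates α versus tracer diameter to α = 1. The trajectory-based MSD is only illustrative (the measurements above rely on DLS-based microrheology), and the α values and diameters used in the extrapolation are placeholders rather than the data of Figure 2f.

```python
# Illustrative sketch only: the study obtains MSDs from DLS-based microrheology,
# but the same analysis applies to any MSD(t) curve. Alpha = 1 indicates free
# (Brownian) diffusion; alpha < 1 indicates mesh-hindered (subdiffusive) motion.
import numpy as np

def anomalous_exponent(lag_times_s, msd_m2):
    """Slope of log(MSD) versus log(lag time)."""
    slope, _ = np.polyfit(np.log(lag_times_s), np.log(msd_m2), 1)
    return slope

# Example MSD from a synthetic free-diffusion trajectory (alpha should be ~1).
rng = np.random.default_rng(0)
steps = rng.normal(scale=5e-9, size=(20000, 2))          # 2D steps, metres
track = np.cumsum(steps, axis=0)
lags = np.arange(1, 500)
msd = np.array([np.mean(np.sum((track[l:] - track[:-l])**2, axis=1)) for l in lags])
print(f"free tracer: alpha = {anomalous_exponent(lags * 1e-3, msd):.2f}")

# Extrapolation of alpha versus coated-tracer diameter to alpha = 1 (placeholders).
diameters_nm = np.array([73.0, 105.0, 310.0])
alphas = np.array([0.92, 0.85, 0.55])
a, b = np.polyfit(diameters_nm, alphas, 1)                # linear model alpha ~ a*d + b
print(f"estimated mesh size S ~ {(1.0 - b) / a:.0f} nm")  # diameter where alpha = 1
```

With a simple linear model for α(d), the placeholder numbers above give S of a few tens of nanometres, the same order of magnitude as the 25 ± 10 nm obtained from the measured data.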
The microcantilevers are functionalized with the same gel-phase phospholipid bilayer containing 0.5% of biotinylated lipid headgroups. In this configuration, 99.5% of the zwitterionic headgroups act as a relatively robust antifouling layer with the membrane in gel phase; the biotinylated headgroups can specifically bind the model EVs after further functionalization of the microcantilever with streptavidin ( Figure 3 c,d). An AFM is used to track any changes in the cantilever resonance over time, allowing for quantification of the mass uptake associated with specific EV binding to the cantilever (see Supporting Information sections 4–5 and Figures S4–S5 for more details). The results show a clear sensitivity to the model EVs binding to the cantilever ( Figure 3 e), with meaningful measurements achieved at concentrations down to 0.3 μg/mL in a single drop (100 μL) of saliva. The sensitivity threshold appears to be around 0.3 μg/mL, where the readout becomes close to the control. While the natural concentration of native EVs in saliva is not known, various studies estimate a range 1 or 2 orders of magnitude greater than the present sensitivity achieved. 49 , 50 Interestingly, a rapid uptake is visible over the first 5–10 min, followed by a slower uptake also present in the control experiment (pure saliva). This suggests that a quantitative readout is possible in less than 30 min despite the small sample volume and the absence of any sample preparation or conditioning. This compares favorably to the standard EV characterization methods based on affinity columns, 51 provided no quantification of the EVs’ encapsulated cargo is needed. A consistent analysis of the results was achieved by globally fitting all the experimental results with a double exponential and imposing the same two time scales for all the experiments ( Figure 3 f). The evolution of the mass uptake with time, m ( t ), is fitted with a double-exponential function in which M is the maximum mass uptake in each experiment, and m 1 and m 2 are the concentration-dependent fitting coefficients associated with the global time scales τ 1 and τ 2 (a plausible form of the fitting function, together with the global-fit procedure, is sketched below). The use of a double exponential model to describe an adsorption process evolving over two distinct time scales is usually referred to as the Largitte double step kinetics model. 52 The initial rapid uptake is not visible for the control and the lowest EV concentration (0.1 μg/mL), and the associated coefficients m 1 are hence set to zero. When plotting M against the EV concentration C present in saliva at the start of the experiment ( Figure 3 f), a two-regime behavior emerges. Near the detection threshold, M increases rapidly with C , likely limited only by diffusion of the target EVs to the surface of the sensing cantilever. As more and more EVs get tethered to the surface, the binding rate decreases because diffusing EVs need to find an uncovered region of the cantilever to bind. In this interpretation, this second regime dominates at larger C , where it reduces the dependence of M on C , as visible in Figure 3 e. Significantly, since the evolution of M with C is determined by the ability of EVs to bind to the microcantilever, Figure 3 f effectively acts as a calibration curve for microcantilevers with the specific surface geometry used here. Additionally, since the technique requires only a drop of fluid, it can easily be multiplexed to simultaneously quantify multiple EV targets and improve accuracy.
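The sketch below assumes the functional form m(t) = m1(1 − e^(−t/τ1)) + m2(1 − e^(−t/τ2)), with M = m1 + m2; this is consistent with the description above but may differ in detail from the published expression. The global fit shares τ1 and τ2 across all concentrations, and the synthetic traces merely stand in for the measured mass-uptake curves.

```python
# Hedged sketch (assumed functional form; the published equation may differ):
#   m(t) = m1*(1 - exp(-t/tau1)) + m2*(1 - exp(-t/tau2)),  with M = m1 + m2,
# fitted globally so that tau1 and tau2 are shared by all concentrations.
import numpy as np
from scipy.optimize import least_squares

def model(t, m1, m2, tau1, tau2):
    return m1 * (1 - np.exp(-t / tau1)) + m2 * (1 - np.exp(-t / tau2))

# Synthetic stand-in data for three concentrations (replace with measured traces).
rng = np.random.default_rng(1)
t = np.linspace(0, 3600, 120)                                # seconds
true = {0.3: (0.5, 1.0), 1.0: (2.0, 3.0), 3.0: (4.0, 5.0)}   # (m1, m2) in pg
traces = {c: model(t, m1, m2, 346, 5781) + rng.normal(0, 0.05, t.size)
          for c, (m1, m2) in true.items()}

def residuals(p):
    tau1, tau2 = p[:2]                                       # global time scales
    res = []
    for i, trace in enumerate(traces.values()):
        m1, m2 = p[2 + 2*i], p[3 + 2*i]                      # per-trace amplitudes
        res.append(model(t, m1, m2, tau1, tau2) - trace)
    return np.concatenate(res)

x0 = np.r_[300.0, 6000.0, np.ones(2 * len(traces))]
fit = least_squares(residuals, x0, bounds=(0, np.inf))
print("tau1, tau2 =", fit.x[:2])
print("M per concentration =", [fit.x[2 + 2*i] + fit.x[3 + 2*i] for i in range(len(traces))])
```

Sharing the two time scales across datasets is what makes the fitted maximum uptakes M directly comparable from one concentration to the next.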
Detecting Specific Natural EV Subpopulations in Human Saliva The results presented in Figure 3 validate the possibility of EV detection directly in bodily fluids using vibrating microcantilevers. However, in the absence of a precise reference or accepted standard for the EV populations naturally present inside bodily fluids, it is not yet obvious whether the method can achieve meaningful measurements of natural EV subpopulations. To test this, cantilevers identical to those used in Figure 3 are functionalized with antibodies able to selectively bind common natural markers. We selected members of the tetraspanin family (CD9 and CD81) as markers. 53 , 54 CD9 and CD81 are cell surface glycoproteins which mediate a wide range of cell functions from B–T cell interactions to platelet activation 53 and aggregation. Naturally occurring EVs with these markers are present in every bodily fluid of a healthy human being, 53 but variations in their concentration could indicate neoplastic evolution 54 or other diseases. 55 Even if the concentration range in healthy subjects remains to be determined, with currently no accepted standard, the associated EVs have been suggested for early cancer diagnostics. 54 , 56 Here, they are used as a generic test for the setup’s capabilities. Figure 4 shows the results for the detection of natural EVs exhibiting CD9 or CD81 in the saliva of two different healthy individuals. The functionalization process is similar to that used for model EVs but with an additional step wherein a biotinylated version of the desired antibody is tethered to the exposed streptavidin receptor of the cantilever. Informed by the calibration experiment ( Figure 3 ), the measurements are conducted over only 60 min and analyzed using the same double exponential fitting with the values of τ 1 and τ 2 imposed as the values found in Figure 3 (τ 1 = 346 s, τ 2 = 5781 s), except for the baseline, which is fitted with a single exponential. The results show a clear difference in mass uptake between the baseline (black) and the cantilever functionalized with the tetraspanin antibodies (blue and red). The data are well fitted by the double exponential function with the imposed time scales, confirming the generality of the model in this configuration. Interestingly, the results highlight interindividual differences in the total content and ratio of EVs expressing CD9 and CD81 antigens. This further shows the potential of the proposed method to contribute to novel diagnostics and to the development of personalized medicine. Using the results from Figure 3 f as a calibration, we estimate that the concentrations of the EVs expressing CD9 and CD81 antigens are respectively 10.3 ± 0.9 and 1.5 ± 0.2 μg/mL for individual 1 and 8.2 ± 1.1 and 36.1 ± 9.0 μg/mL for individual 2. We emphasize that these values cannot be independently verified in this study and were derived under the implicit assumptions that the setup behaves similarly to the way it does with model EVs, with a similar affinity for the microcantilever and without any nonspecific fusion of the EVs with the cantilever’s antifouling lipid layer. 57 These assumptions are not obvious and will require further work to confirm their reliability and benchmark the technique. 58 Additionally, complications with the current multistep functionalization make it difficult to achieve consistent control measurements (see Supporting Information section 6 and Figures S6–S7 ).
Nevertheless, the present results show that the setup has the potential to detect and quantify specific subpopulations of EVs naturally occurring in human saliva.
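In practice, turning a measured mass uptake into a concentration, as done for the estimates above, amounts to inverting the calibration relation of Figure 3f. A minimal sketch is shown below; the tabulated calibration points are placeholders rather than the published values.

```python
# Sketch: invert a calibration curve M(C) (cf. Figure 3f) to estimate the EV
# concentration from a measured mass uptake. Calibration values are placeholders,
# not the published data.
import numpy as np

calib_conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])   # ug/mL (placeholder)
calib_mass = np.array([0.0, 0.8, 2.5, 4.0, 5.0])    # pg     (placeholder)

def concentration_from_uptake(m_measured_pg):
    """Monotonic interpolation of the calibration curve, inverted as M -> C."""
    return np.interp(m_measured_pg, calib_mass, calib_conc)

print(f"C ~ {concentration_from_uptake(3.2):.1f} ug/mL")
```

Because the second, saturating regime flattens M at large C, the inversion becomes progressively less precise at high concentrations, which is consistent with the larger uncertainties quoted for the most concentrated subpopulation above.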
Discussion Complex fluids are ubiquitous in nature from bodily fluids to water waste, oil reservoirs, lubricants, and food products. They play a key role in science and technology and are at the forefront of active research. 59 Most complex fluids are composed of a mixture of macromolecules dissolved in a Newtonian fluid, with the dynamics of these macromolecules conferring the resulting solution its non-Newtonian behavior. Here, we show that saliva exhibits a scale-dependent viscoelastic behavior due to the finite size of the biopolymers forming a macromolecular mesh-like structure. At the nanometer level, saliva can be considered as a Newtonian fluid, whereas the non-Newtonian behavior emerges at a scale characteristic of the mesh. While we only investigate saliva in this study, the observed scale-dependent viscoelasticity is likely to be valid for many complex fluids that exhibit a similar structure over scales, but the relevant scales are likely to be fluid specific. For saliva alone, autoimmune pathologies altering the content of mucin fibers and antibodies in saliva and other bodily fluids 1 could influence the diffusion length scale of tracers. This is also the case for any other complex fluids whose specific composition is expected to impact the diffusion length scale and, consequently, the mobility of sensing microcantilevers. Thus, the design of any detection methods for complex fluids should include a “calibration step” assessing this characteristic length scale. We show that the scale-dependence can be exploited for conducting mechanical sensing directly in the complex fluid while retaining the signal-to-noise ratio normally only possible in simple Newtonian fluids. Calibration and testing of the setup with model lipid vesicles dissolved in raw saliva show a sensitivity in the picogram range, with the intrinsic detection noise level of the setup below this range. Based on the size and composition of the model EVs, we estimate their mass to be on the order of 0.001 pg (∼1 femtogram/model EV), suggesting an effective EV detection sensitivity in the range of 500–1000 EVs. The mass of natural EVs is likely significantly higher when taking into account the nucleic and proteic compounds, thereby increasing the sensitivity per EV. Additionally, the system used here solely relies on commercial equipment and could be significantly improved. First, the sensing cantilevers could be replaced by bespoke cantilevers with a geometry optimized for maximizing the sensing surface while retaining the ability to operate with small amplitudes and high frequencies. Second, the sensing could be developed with self-actuating microresonators, bypassing the need for the expensive laser system of the AFM. Finally, the process is suitable for parallelizing with multiple sensors operating over the same drop-size sample. This would open the possibility for simultaneous repeats and averaging as well as complementary detection of multiple targets, thereby improving the statistical accuracy and predictive power of the detection. Detection of native EVs expressing the tetraspanin CD9 and CD81 antigens suggests that the setup is able to directly pick these subpopulations from saliva samples of healthy individuals. Here, tetraspanins are used as a test owing to their ubiquity in bodily fluids EVs. Several studies suggest that CD9 and CD81 expression may have a clinical significance in neoplastic diseases, but these are usually not seen as specific enough to represent a clear diagnostic tool. 
60 − 65 In fact, the results shown in Figure 4 indicate significant variations in the concentration detected between healthy individuals (see also Supporting Information section 7 and Figure S8 for more details). Variations may also occur over time for the same healthy individual, something we did not explore. Further work is needed to assess the suitability of the method on more specific targets, something potentially more challenging to achieve if the associated EV subpopulation is significantly smaller. Several antigens have already been identified for immunocapture of EVs from patients with different diseases including cancer, and the method could make a significant difference in routine testing and early detection, especially considering the relatively rapid readout (<20 min). However, while it is generally well established that EVs carry information about diseases such as lung, 8 esophageal, 66 pancreatic, 67 and breast 68 cancers, there is no robust consensus on the most reliable markers with multiple candidates reported. It is therefore necessary to comparatively test several candidate markers on biopsies from healthy, precancerous, and malignant cancer patients. If successful, this would fully validate the technology and open the door for testing other potential diseases with this method.
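Before concluding, a back-of-envelope check of the femtogram-scale mass per model EV used in the sensitivity estimate above can be made from the vesicle geometry alone. The numbers below are assumptions for illustration (100 nm outer diameter, a ~5 nm bilayer, and densities of about 1 g/cm³ for both the lipid and the enclosed buffer), not values taken from the measurements.

```python
# Back-of-envelope check of the femtogram-scale mass per model EV.
# Assumptions (not from the paper): 100 nm outer diameter, ~5 nm bilayer,
# densities of ~1 g/cm^3 for both the lipid shell and the enclosed buffer.
import math

r_out = 50e-9                      # outer radius, m
t_bilayer = 5e-9                   # bilayer thickness, m
rho = 1.0e3                        # density, kg/m^3

shell_volume = 4 / 3 * math.pi * (r_out**3 - (r_out - t_bilayer)**3)
full_volume = 4 / 3 * math.pi * r_out**3

print(f"bilayer shell only : {shell_volume * rho * 1e18:.2f} fg")
print(f"whole vesicle      : {full_volume * rho * 1e18:.2f} fg")
```

Depending on whether the enclosed buffer is counted, this gives roughly 0.1–0.5 fg per vesicle, i.e., the femtogram order of magnitude quoted in the discussion above.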
Conclusion In this study, we show that the viscoelastic properties of saliva are scale-dependent, with the liquid behaving as a Newtonian fluid at the nanoscale. This is due to the comparatively large scale of the dissolved biopolymers and cell materials that create a slow-evolving mesh through which water and small molecules can move freely. Using microrheology, we demonstrate that tracers smaller than ∼25 nm can diffuse freely provided they do not interact with the biomesh. We exploit this finding to achieve quantitative mechanically based detection of model lipid nanovesicles with a specific biomarker directly inside drops of unprocessed saliva. We illustrate the potential of the technique to detect specific subpopulations of EVs based on proteic markers. More work is needed to independently benchmark the technique and confirm the detection of specific EVs. The fact that the detection method is mechanical could also be further exploited to sense pathological variations in the EVs’ mechanical properties, 69 for example using higher vibration eigenmodes of the lever. 12 , 27 Finally, although the focus of the present study is on EV quantification motivated by cancer detection, the method can in principle be applied to the detection of any kind of nano-objects in suitable complex fluids, from viruses to nanoparticles and toxins. It could be particularly useful where samples are limited in quantity or where a relatively rapid answer is needed, for example, in the analysis of toxicity changes and pollutant levels after each treatment step in wastewater recovery.
Extracellular nanovesicles (EVs) are lipid-based vesicles secreted by cells and are present in all bodily fluids. They play a central role in communication between distant cells and have been proposed as potential indicators for the early detection of a wide range of diseases, including different types of cancer. However, reliable quantification of a specific subpopulation of EVs remains challenging. The process is typically lengthy and costly and requires purification of relatively large quantities of biopsy samples. Here, we show that microcantilevers operated with sufficiently small vibration amplitudes can successfully quantify a specific subpopulation of EVs directly from a drop (0.1 mL) of unprocessed saliva in less than 20 min. Being a complex fluid, saliva is highly non-Newtonian, normally precluding mechanical sensing. With a combination of standard rheology and microrheology, we demonstrate that the non-Newtonian properties are scale-dependent, enabling microcantilever measurements with a sensitivity identical to that in pure water when operating at the nanoscale. We also address the problem of unwanted sensor biofouling by using a zwitterionic coating, allowing efficient quantification of EVs at concentrations down to 0.1 μg/mL, based on immunorecognition of the EVs’ surface proteins. We benchmark the technique on model EVs and illustrate its potential by quantifying populations of natural EVs commonly present in human saliva. The method effectively bypasses the difficulty of targeted detection in non-Newtonian fluids and could be used for various applications, from the detection of EVs and viruses in bodily fluids to the detection of molecular clusters or nanoparticles in other complex fluids.
Experimental Section Ultrapure water was purchased from Water AnalaR NORMAPUR, VWR International Ltd., Leicestershire, UK. For all of the experiments using saliva, fresh samples were obtained from healthy volunteers (two of the authors) and used on the same day. The saliva samples were collected in the morning between 7:00 am–9:00 am after overnight fasting, directly into a sterile glass vial, and used without any further processing or purification. The samples not immediately used were kept at 5 °C (measurements conducted later in the day) and warmed to the measuring temperature (25 °C) immediately before use, taking care to keep the sample homogeneous. For the experiments using model EVs, the desired quantity of EVs was added to the sample, which was then homogenized in a mild sonication bath (see details in the Model EVs section hereafter). Shear Rheometry Shear rheometry was performed using a commercial Advanced Rheometer model AR 2000 (TA Instrument, New Castle, DE, USA), equipped with 8 mm parallel plates and an environmental test chamber under nitrogen gas. The fluids were compressed between the parallel plates under atmospheric pressure until a gap of approximately 1 mm thickness and a small normal force was registered by the rheometer. To determine the full rheological response, oscillatory tests were performed at angular frequencies between 0.1 and 600 rad/s, and with strain amplitudes of 1%, after examination of the dynamic strain sweep as a function of frequency. The temperature was kept constant at 25 °C. Microrheology Microrheology measurements were performed using a Malvern Zetasizer NanoZSP (Malvern Panalytical, Worcestershire, UK). The tracer particles were silica nanospheres with nominal diameters of 50, 100, and 300 nm monodispersed in water with a concentration of 10 mg/mL (nanoComposix, San Diego, CA, USA). The silica particles were then diluted in water or saliva to a final concentration of 0.1 mg/mL and tip-sonicated for 45 s to remove any aggregates. Before microrheological measurements were conducted, standard dynamic light scattering (DLS) ensured a monodisperse distribution of the particles. Lipid-Coating of the Tracers Lipid-coating of the tracers was performed by incubating small unilamellar lipid vesicles (SUVs) of 1,2-dioleoyl- sn -glycero-3-phosphocholine (DOPC) into a PBS solution (137 mM NaCl, 2.7 mM KCl, 10 mM Na 2 HPO 4 , and 1.8 mM KH 2 PO 4 at pH 7.4) containing the dissolved silica tracers. To ensure full coating, we estimated the total surface area of the tracers in solution and used a 10-fold excess of fluid lipid vesicles (in terms of total bilayer area) adsorbing onto the silica particles. The particles were then diluted in PBS to the desired concentrations for the experiment. DOPC was purchased in liquid form, dissolved in chloroform (Avanti Polar Lipids, AL, USA), and used without any further purification. After chloroform evaporation in a vacuum overnight, lipids were resuspended in a PBS solution at a final concentration of 10 mg/mL. PBS solution was produced using preprepared tablets (Sigma-Aldrich, St Louis, MO, USA). SUVs of diameter ∼100 nm were obtained by bath-sonicating the lipid solution at 25 °C for 10 min to produce a uniformly clear solution, followed by extrusion through a 100 nm filter (WhatMan, Sigma-Aldrich) with at least 31 passes, and then used immediately. Model EVs Model EVs were prepared with a biotinylated lipid mixture. 
The lipid mixture comprised 99.5% dipalmitoylphosphatidylcholine (DPPC) and 0.5% biotinylated-DPPE (1,2-dipalmitoyl- sn -glycero-3-phosphoethanolamine). Both lipids were purchased from Avanti Polar Lipids, AL, USA, and mixed to the desired ratio in chloroform. The chloroform was then evaporated in a vacuum overnight. After resuspending the lipids in PBS and bath-sonicating them at 60 °C for 15 min, SUVs were then prepared by extrusion through a 100 nm filter. The desired proportion of model EVs was then added to the saliva samples and bath-sonicated for 5 min before use. Cantilever Functionalization Cantilever functionalization was performed using the same lipid mixture used for the model EVs (99.5% DPPC + 0.5% biotinylated DPPE). Before functionalization, the cantilevers underwent thorough cleaning procedures to ensure the removal of any potential contaminants. 31 − 33 The cantilevers were immersed in a bath of ultrapure water, followed by propan-2-ol (Merck Millipore, Billerica, MA, USA), and finally ultrapure water, for 60 min at each step. The propan-2-ol was used as purchased without further purification. The cantilevers were then exposed to low-pressure air plasma, at a pressure of 1 mbar and power of 300 W (VacuLAB Plasma Treater, Tantec) for 30 s. Plasma-oxidation increased the hydrophilicity of the cantilevers and removed unwanted carbon contaminants. A drop of SUVs (150 μL) with a concentration of 1 mg/mL was drop-cast on the cantilever. After a 20 min incubation, the cantilevers were gently rinsed with freshly prepared PBS and left soaking in clean PBS for 1 h to ensure removal of any excess nonadsorbed vesicles. After this, the cantilevers were rinsed again with PBS. The cantilevers were then further functionalized using streptavidin (Thermofisher, UK), which acted as a bridge between the two biotinylated lipid bilayers forming the EVs’ surface and the cantilever coating. The streptavidin functionalization was performed by first soaking the probes in a solution containing 0.1 mg/mL streptavidin in PBS. After incubation for 1.30 h, the cantilevers were gently rinsed with PBS and then left soaking in clean PBS for 1 h. The cantilevers used in the detection of naturally occurring EVs needed a further functionalization step. This involved the binding of anti-CD9 and anti-CD81 monoclonal antibodies to the streptavidin functionalized cantilevers. The antibodies were obtained by recombinant DNA technique, with a human species reactivity and purchased from ABCAM, UK. The binding of the antibodies to the streptavidin was performed by biotinylation conjugation using the specific Biotin Conjugation Kit (Fast, Type A) Lightning-Link (ABCAM, UK). The cantilevers used for the negative controls were functionalized with streptavidin ( Results section: Testing the Method with Model EVs ) and pure DPPC ( Results section: Detecting Specific Natural EV Subpopulations in Human Saliva ). The design of suitable controls is further discussed in the Supporting Information . Atomic Force Microscopy (AFM) The experiments on the dynamic response of vibrating microcantilevers in water and saliva were conducted using a commercial Cypher ES AFM (Oxford Instruments, CA, USA) instrument equipped with temperature control. We used two types of commercial cantilevers with each cantilever calibrated using its thermal spectrum. 
34 Initial experiments ( Figure 1 ) were conducted with OMCL-RC800PSA silicon oxide cantilevers (Olympus, Japan) that exhibit a stiffness of 0.1–0.2 N m –1 and a resonance frequency of 20 ± 5 kHz in air. All the quantitative measurements were conducted with stiffer and shorter AC55-TS silicon oxide cantilevers (Olympus, Japan), which exhibit a typical flexural stiffness of 50 ± 5 nN/nm and a resonance frequency of 1600 ± 300 kHz in air. High-quality V1 muscovite mica discs (SPI supplies, West Chester, PA, USA) acted as a substrate on which to deposit the fluid of interest; 150 μL of the fluid was deposited on the mica substrate. All the experiments were performed at 25.0 °C to ensure thermal stability. 35 , 36 Thermal equilibrium was considered achieved once the cooling/heating rate of the temperature control system within the AFM had remained constant for at least 20 min. Once thermal equilibrium was reached, the cantilever was fully immersed in the fluid, with its motion driven by photothermal excitation.
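The thermal-spectrum calibration mentioned above can be sketched with the equipartition theorem: the area under the fundamental resonance peak of the deflection power spectral density gives the mean-square thermal deflection, from which the spring constant follows. Mode-shape and hydrodynamic correction factors (of order unity) are deliberately omitted here, and the synthetic spectrum is only a stand-in for measured data.

```python
# Sketch of a thermal (equipartition) spring-constant calibration: integrating the
# deflection power spectral density over the fundamental resonance peak gives the
# mean-square thermal deflection <z^2>, and k ~ kB*T / <z^2>. Mode-shape and
# hydrodynamic correction factors (of order unity) are deliberately omitted.
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def spring_constant(freq_hz, psd_m2_per_hz, f_lo, f_hi, temperature_k=298.0):
    band = (freq_hz >= f_lo) & (freq_hz <= f_hi)
    mean_square_deflection = np.trapz(psd_m2_per_hz[band], freq_hz[band])  # m^2
    return KB * temperature_k / mean_square_deflection                     # N/m

# Synthetic Lorentzian PSD standing in for a measured thermal spectrum (k_true = 50 N/m).
k_true, f0, width = 50.0, 1.3e6, 1.0e4
freq = np.linspace(1.0e6, 1.6e6, 20000)
area = KB * 298.0 / k_true                                  # expected <z^2>
psd = area * (width / np.pi) / ((freq - f0)**2 + width**2)  # Lorentzian with that area

# The finite integration band truncates the tails slightly, so the recovered value
# is a few percent above k_true.
print(f"recovered k = {spring_constant(freq, psd, 1.1e6, 1.5e6):.1f} N/m")
```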
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsami.3c12035 . Example thermal spectra of AFM cantilevers in air, water, and saliva; discussion of storage and loss moduli from standard rheological measurements; characterization of the microrheology tracers; details about the acquisition and analysis of the mass uptake data; characterization of the cantilevers’ functionalization; discussion of the baseline noise and negative controls; Figures S1–S8 and Table S1 ( PDF ) Supplementary Material The authors declare no competing financial interest. Acknowledgments This work was funded by the Institute of Advanced Studies and the Biophysical Science Institute (Durham University) and a Northern Accelerator Proof of Concept (grant NACCF - 224). A.E. is supported by an Australian Research Council (ARC) Discovery Early Career Research Award (DECRA) (DE220100511). We are grateful to Professor Richard Thompson for his help conducting the shear rheometer experiments.
CC BY
no
2024-01-16 23:43:51
ACS Appl Mater Interfaces. 2023 Dec 29; 16(1):44-53
oa_package/d9/11/PMC10788824.tar.gz
PMC10788825
38154045
Introduction Piezoelectric materials play a vital role in various applications including sensors, actuators, resonators, and vibration energy harvesters, particularly in the rapidly developing field of micro-electromechanical systems (MEMS). 1 − 4 MEMS that use piezoelectric devices, also called piezoelectric MEMS, have advantages over their electrostatic and electromagnetic counterparts, such as a simple structure and small size. 5 , 6 To date, Pb-based piezoelectric materials, such as Pb(Zr,Ti)O 3 (PZT) and Pb(Mg 1/3 Nb 2/3 )O 3 –PbTiO 3 (PMN–PT), have been preferred because of their excellent piezoelectric properties. 7 , 8 However, environmental issues and stricter regulations such as the Restriction of Hazardous Substances (RoHS) directive have necessitated the development of alternative materials that do not contain toxic Pb. 9 − 11 In the context of next-generation smart electronics, robotics, and the Internet of Things (IoT), advances in Pb-free piezoelectric thin films are essential for addressing the requirements of high-performance, energy-efficient devices while aligning with sustainable development practices. 12 − 15 In the past, various strategies have been employed to improve the piezoelectric properties, including lattice contribution enhancement, domain reconstruction, morphotropic phase boundary (MPB) engineering, and defect control. 9 , 16 − 19 Among these, MPB engineering has achieved significant breakthroughs and established itself as an important research strategy. In particular, PZT exhibits excellent piezoelectric properties near the MPB composition, where the PbTiO 3 -rich tetragonal and PbZrO 3 -rich rhombohedral exist in approximately equal proportions. Therefore, Pb-free materials with MPB compositions have been actively studied as alternatives to PZT. Typical Pb-free MPB materials, including (K,Na)NbO 3 - and (Bi,Na)TiO 3 -based solid solutions, have been successfully used in bulk-ceramic applications. 20 − 22 Despite remarkable progress in Pb-free piezoelectric ceramics, the development of film-based piezoelectric materials for MEMS applications has challenges. A primary obstacle arises from substrate clamping, which substantially diminishes the electromechanical response, owing to the constraint of in-plane deformation of the film. 23 − 27 In addition, stress from the substrate alters the phase boundaries of piezoelectric materials, diverging from their original composition in bulk ceramics. 26 Therefore, an accurate composition adjustment is necessary to optimize performance. Piezoelectric properties typically exhibit significant sensitivity to composition near the MPB, rendering the control of these properties challenging in thin films. 7 , 28 Consequently, exploration of alternative strategies without MPB is crucial for developing piezoelectric materials that can overcome the limitations imposed by substrate clamping and stress-induced phase-boundary shifts, thereby enhancing the potential of thin-film piezoelectric materials for MEMS applications. Domain switching has recently emerged as a promising approach for enhancing the piezoelectric properties in Pb-free piezoelectric thin films for MEMS applications. This strategy focuses on microdomain structure formation, which is critical for a substantial piezoelectric response of ferroelectric thin films. The application of an electric field causes domain switching, and upon removing the electric field, the domain structure returns to its original state owing to the clamping effect of the substrate. 
This leads to macroscopic film deformation and large piezoelectric properties. A high piezoelectric coefficient of 310 pm/V has been reported for a tetragonal Pb(Zr,Ti)O 3 film using this domain-switching concept. 29 , 30 The factors controlling the piezoelectric response due to domain switching are the lattice anisotropy and changes in the volume fraction of the domains. For instance, the mobility of analogous ferroelastic domains in magnetostrictive materials such as Ni–Mn–Ga alloys has been reported to be dominated by the lattice anisotropy and the volume fraction of the ferroelastic domains. 31 , 32 These results suggest that the mobility of the domains can be controlled by using external electric fields to enhance their contribution to the piezoelectric response. That is, if a film with a large volume fraction of in-plane polarized domains and an appropriate tetragonality, i.e., c / a ratio (where c and a are the lattice parameters along the polar axis, or c -axis, and the nonpolar axis, or a -axis, respectively), can be prepared, then large piezoelectric properties can be obtained by domain switching even for tetragonal films that have an out-of-MPB composition. Nakajima et al. reported that the large c / a ratio of PbTiO 3 ( c / a ≈ 1.06) was decreased by creating a solid solution with PbZrO 3 in ferroelectric PZT thin films, and a large piezoelectric response was obtained at a tetragonal composition near Zr/(Zr+Ti) = 0.4 ( c / a ≈ 1.02), with a relatively low tetragonality compared with that of PbTiO 3 . 29 We focused on a Pb-free material system, (1– x )(Bi,Na)TiO 3 – x BaTiO 3 (BNT-BT), which is a solid solution of tetragonal BaTiO 3 and rhombohedral (Bi,Na)TiO 3 . In bulk ceramics, this material exhibits tetragonal symmetry over a wide composition range of x = 0.06–1.0. 33 , 34 Our group had previously reported that a tetragonal 0.7(Bi,Na)TiO 3 –0.3BaTiO 3 ( x = 0.3) film prepared on a Si substrate exhibited considerably large transverse piezoelectric coefficients ( e 31,f = 19 C/m 2 ) owing to domain switching. 35 Rao et al. reported that the tetragonality hardly changed over a wide composition range of approximately 30 atom % for x = 0.2–0.5. 33 These results suggest the possibility that tetragonal (Bi,Na)TiO 3 –BaTiO 3 films exhibit a large piezoelectric response over a wide composition range via domain switching. Taking the concept of domain switching into account, the choice of a material system showing a stable c / a ratio, such as (Bi,Na)TiO 3 –BaTiO 3 , is optimal for achieving high piezoelectric properties over a wide composition range, in contrast to the continuous change of the c / a ratio with the Zr/(Zr+Ti) ratio in the case of PZT. However, there are few studies on the systematic composition dependence of the ferroelectric and piezoelectric properties on the tetragonal composition side of (1– x )(Bi,Na)TiO 3 – x BaTiO 3 films, apart from the compositions near the MPB ( x = 0.04–0.07), for which high performance has been reported. 36 − 38 In this study, we investigated the composition dependence of the piezoelectric performance of (1– x )(Bi,Na)TiO 3 – x BaTiO 3 ( x = 0.06, 0.2, 0.3, 0.5, and 1.0) films on a Si substrate in the tetragonal phase to develop Pb-free piezoelectric thin films with a large piezoelectric response. As a result, we confirmed an out-of-plane piezoelectric response of d 33,f higher than 220 pm/V, which exceeded the reported value for bulk ceramics in the composition region of x = 0.2–0.5. 39
Furthermore, an e 31,f of 19 C/m 2 was confirmed for samples with cantilever structures over a composition range of at least 10 atom %, for x = 0.2 and 0.3. This value is the highest reported for Pb-free piezoelectric thin films and is comparable to the reported data for Pb-based materials. 15 More importantly, the films exhibited a large piezoelectric response over a composition range several times wider than that of the MPB, which is limited to a compositional range of 1–2%. This enables stable properties to be obtained for the deposited films despite composition fluctuations. This is an advantage over films that rely on the MPB composition, which require precise composition control owing to the high composition sensitivity of the piezoelectric properties. The present concept of reversible domain switching allows a departure from conventional MPB compositions and provides improved piezoelectric properties over a wider composition range. These results will expand the scope of research on piezoelectric materials, which has focused mainly on Pb-based materials, especially near MPB compositions, for the last 70 years.
Results and Discussion Figure 1 (a,b) shows the out-of-plane and in-plane X-ray diffraction (XRD) patterns of the 2.0 μm thick (1– x )(Bi,Na)TiO 3 – x BaTiO 3 films with x = 0.06–1.0 on the Si substrate with buffer layers, respectively. Wider 2θ and 2θ χ range scan data are presented in Figure S1(a,b) . In the out-of-plane XRD patterns shown in Figures 1 (a) and S1(a) in the Supporting Information , h 00 or 00 l diffraction peaks from tetragonal (Bi,Na)TiO 3 –BaTiO 3 were observed together with h 00 c diffraction peaks from the underlying perovskite layers with pseudocubic cells, such as LaNiO 3 and (La 0.5 Sr 0.5 )CoO 3 . In addition, the shift of the (200) diffraction peaks to lower angles with increasing x , without any obvious secondary phase, indicates an increase in the out-of-plane lattice parameter ( a -axis) and the formation of a solid solution, consistent with the bulk references ( 33 , 40 ). The in-plane GIXRD patterns shown in Figures 1 (b) and S1(b) in the Supporting Information show the peaks derived from the perovskite structure. In addition, the presence of both {101} and {100} peaks in these in-plane measurements indicates that the in-plane direction is polycrystalline, and thus that the film is uniaxially oriented. Figure S2(a,b) shows surface and cross-sectional SEM images, respectively. As shown in Figure S2(a) , the thin film is composed of grains with almost uniform size. The random shape of the grains corresponds to the in-plane polycrystalline nature of the film. In addition, as shown in Figure S2(b) , the film is dense, with no clear columnar structure. Figure 2 (a,b) illustrates the composition dependence of the out-of-plane and in-plane lattice parameters obtained from out-of-plane XRD θ–2θ and in-plane GIXRD scans along with the tetragonality, defined as {(out-of-plane lattice parameter)/(in-plane lattice parameter) – 1} and presented in Figure 2 (b). The squares and diamonds represent the out-of-plane and in-plane lattice parameters, calculated from the {200} peak position of the out-of-plane XRD θ–2θ scan and the {002} peak positions on the in-plane GIXRD pattern, respectively. The previously reported c - and a -axes data for the sintered body, depicted using closed 33 and open 40 circles, and c / a ratios, depicted using triangles, are plotted in Figure 2 (a,b), respectively. The results shown in Figure 2 (a) reveal that all of the films prepared in this study have smaller c -axis values and larger a -axis values than the bulk lattice parameters. It was also ascertained that this orientation did not change dramatically when the underlying (La,Sr)CoO 3 was replaced with other bottom electrodes, such as SrRuO 3 (not shown here). As shown in Figure 2 (b), the observed tetragonality was considerably smaller than the reported values for ceramics. Relatively higher tetragonality was observed in the approximate composition range of x = 0.2–0.5 among the films prepared in this study, indicated using a hatch in Figure 2 , while the tetragonality of the film at x = 0.06 was nearly 0%. Considering the pseudocubic structure reported for (Bi,Na)TiO 3 –BaTiO 3 at x = 0.04–0.07 in bulk ceramics, the results presented herein for the films are almost consistent with the reported results, and thus, the (Bi,Na)TiO 3 –BaTiO 3 films prepared herein have a tetragonal structure when x = 0.2–1.0.
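As an aside on the lattice parameters of Figure 2, the arithmetic is a direct application of Bragg's law to the {200} and {002} peak positions. The wavelength (Cu Kα) and the 2θ values in the sketch below are assumptions for illustration, not the measured positions.

```python
# Sketch of the lattice-parameter arithmetic behind Figure 2. The X-ray
# wavelength (Cu K-alpha assumed) and the 2-theta peak positions below are
# illustrative, not the measured values.
import math

WAVELENGTH = 1.5406  # angstrom, Cu K-alpha (assumption)

def lattice_parameter_from_two_theta(two_theta_deg, hkl_index=2):
    """d-spacing from Bragg's law, scaled to the lattice parameter for an (h00)/(00l) peak."""
    theta = math.radians(two_theta_deg / 2)
    d = WAVELENGTH / (2 * math.sin(theta))   # Bragg: lambda = 2 d sin(theta)
    return d * hkl_index                     # a = 2*d_200, c = 2*d_002

a_out_of_plane = lattice_parameter_from_two_theta(46.6)  # {200}, out-of-plane scan
c_in_plane = lattice_parameter_from_two_theta(46.2)      # {002}, in-plane GIXRD

# Axial ratio minus 1; the larger (in-plane) parameter is divided by the smaller
# (out-of-plane) one here so that the magnitude matches what Figure 2(b) plots.
tetragonality = c_in_plane / a_out_of_plane - 1
print(f"a = {a_out_of_plane:.3f} A, c = {c_in_plane:.3f} A, "
      f"tetragonality = {100 * tetragonality:.2f} %")
```

The tetragonality plotted in Figure 2(b) then follows directly from the ratio of the two lattice parameters minus one.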
According to our previous reports, tetragonal (Bi,Na)TiO 3 –BaTiO 3 films ( x = 0.06–1.0) deposited on SrTiO 3 substrates are epitaxial films and were subjected to detailed XRD analysis. 41 , 42 These results show that the volume fraction of the (100) orientation and non-180° domain fraction of the (100)/(001)-oriented ferroelectric films are determined by the thermal strain from the substrate; the thermal expansion coefficient of the SrTiO 3 substrate is 10.9 × 10 –6 /K, 43 which is larger than that of (Bi,Na)TiO 3 –BaTiO 3 (approximately 6 × 10 –6 /K). 33 Thus, the (Bi,Na)TiO 3 –BaTiO 3 film on the SrTiO 3 substrate was confirmed to have a pure (001) orientation and c -domain structure with a polarization axis along the out-of-plane direction, which can be attributed to the in-plane compressive strain experienced during the cooling process after deposition. Conversely, Shimizu et al. reported that (Bi,Na)TiO 3 –BaTiO 3 films deposited on Si substrates have a (100) orientation and a -domain structure with a polarization axis along the in-plane direction due to the in-plane tensile strain because the Si substrate has a thermal expansion coefficient smaller than that of the films, 35 that is, 3.6 × 10 –6 /K. 44 The in-plane lattice parameters were larger than the out-of-plane parameters for all films prepared in this study. These results suggest that the (Bi,Na)TiO 3 –BaTiO 3 films on the Si substrate in this study are considered to be a -domain oriented films for all tetragonal compositions. Figure 3 shows the measured electrical properties. Figure 3 (a,b) illustrates the polarization–electric ( P – E ) curves measured at 10 kHz with various amplitudes of the triangular wave for (1– x )(Bi,Na)TiO 3 – x BaTiO 3 films at (a) x = 0.06 and (b) x = 0.2, respectively, where the amplitudes increased sequentially. The measurement was conducted on a pristine electrode without any applied electric field. For these two compositions, the remanent polarization ( P r ) was well saturated above a high electric field amplitude of 200 kV/cm; however, the manner of saturation was different, particularly at a low electric field amplitude of <150 kV/cm. Small loops were observed at small amplitudes compared with those at large amplitudes for films with x = 0.2. Figure 3 (c,d) shows the P r value as a function of the amplitude for films with x = 0.06 and 0.2, respectively. These measurements were performed twice, from low to high electric fields, for each composition film, and the results for the first and second sweeps are indicated using black squares and red circles, respectively. The black-circle plots in Figure 3 (c) indicate that the P r value shows similar behavior in the first and second cycles for the film with x = 0.06, an increase with increasing electric field amplitude, and saturation above the coercive electric field, in agreement with the gradual change in the P – E loop in Figure 3 (a). This behavior is typical of ferroelectric materials. Conversely, as represented by the circles in Figure 3 (d), the P r value of the film with x = 0.2 tends to saturate against the amplitude twice; the first saturation occurs at a relatively low value below approximately 150 kV/cm, and the second saturation occurs above 200 kV/cm through a rapid increase at 160–200 kV/cm. 30 The value after the second saturation is 1.5–2 times larger than that after the first saturation. No abrupt increase in the P r value was observed during the second cycle. 
Consequently, the P r value observed in the second cycle was larger than that in the first cycle, as shown in Figure 3 (d), at low-field amplitudes such as 120 kV/cm. The measured amplitude dependencies of the P r for all (1– x )(Bi,Na)TiO 3 – x BaTiO 3 ( x = 0.06–1.0) films in the first and second cycles are also plotted in Figure S2(a,b) . Two-step increments in P r are observed for films with high tetragonality in Figure 2 (b), that is, those with x = 0.2, 0.3, and 0.5. Figure 4 (a) illustrates the P – E relationships measured at 10 kHz and an electric field amplitude of 250 kV/cm from the second cycle of sweep-up for (1– x )(Bi,Na)TiO 3 – x BaTiO 3 ( x = 0.06–1.0) films. Clear hysteresis loops originating from ferroelectricity were obtained for all films. In addition, Figure 4 (b) shows the unipolar strain–electric field ( S – E ) curves measured at 10 kHz and an amplitude of +150 kV/cm after the poling treatment with an amplitude of +250 kV/cm. The composition dependence of P r and the piezoelectric properties, that is, the longitudinal effective piezoelectric response d 33,f , at an applied electric field amplitude of 150 kV/cm is presented in Figure 4 (c,d), respectively. In this study, d 33,f was defined as S max / E max , where S max and E max are the maximum strain and electric field, respectively. In Figure 4 (c,d), the circles represent P r and d 33,f at a 150 kV/cm electric field amplitude after poling at +150 kV/cm, while the squares represent data after poling at +250 kV/cm, that is, the circles and squares in Figure 4 (c) correspond to the P r values of the first and second cycles at a 150 kV/cm amplitude, respectively, presented in Figure S3 . In Figure 4 (c), the composition dependence of the P r value shows a continuous decrease as the x value deviates from the MPB composition, x ≈ 0.06, for both the first and second cycles. The P r values of the films were smaller than those reported for ceramics and c -axis oriented films on SrTiO 3 substrates. 39 , 41 This is mainly due to the a -axis orientation of these films and the suppressed tetragonality in comparison with bulk ceramics, as shown in Figure 2 (b). 39 Furthermore, comparing the data for the first and second cycles of sweep-up in Figure 4 (c) revealed that the films with x = 0.2, 0.3, and 0.5 exhibit a “two-step increase” in the P r value, as confirmed from the data presented in Figures 3 and S2 . These films exhibit relatively high tetragonality values, as shown in Figure 2 (b). In bulk ceramics, it has been reported that tetragonal and rhombohedral phases coexist at x = 0.05–0.07. 28 In the present study, the remanent polarization value of the film with x = 0.06 is larger than that of the tetragonal films with x = 0.2–1.0, which have the a -domain as the majority orientation, suggesting that the film with x = 0.06 may include a rhombohedral phase with a [111] polar axis. The observed axial ratio of almost unity (near-zero tetragonality) also supports the existence of a rhombohedral phase in the film with x = 0.06. In Figure 4 (d), the composition dependence of d 33,f in the first cycle exhibits a trend similar to that of the ceramics, where the values decrease as x departs from the MPB composition, x ≈ 0.06. Conversely, in the second cycle, films with compositions of x = 0.2–0.5, where a two-step increase in P r was observed in Figure S2(b) , showed high d 33,f values beyond 220 pm/V.
These values surpass the results for the film at x = 0.06, the composition closest to the MPB, and those previously reported for ceramics at x = 0.06 ( d 33,f ≈ 140 pm/V) near the MPB composition. 45 In short, a large d 33,f above 220 pm/V was obtained over a wide composition range of 30 atom % for tetragonal BNT-BT films far from the MPB composition, and it exceeds the previously reported value for bulk ceramics with the MPB composition, as expected. Here, Figure S4(a–e) shows the atomic force microscopy (AFM) images of the films with x = 0.06–1.0. Randomly arranged and uniformly shaped grains were observed for all films. These results may correspond to the in-plane polycrystalline characteristics of the films. The average roughness ( R a ) plotted against composition is shown in Figure S5 . As shown in Figures 4 (d) and S5 , there is no strong correlation between the composition dependences of R a and d 33,f . Owing to the large thickness of the films, the misfit strain introduced by the difference in the lattice parameters between the film and the substrate is considered to be almost fully relaxed during film deposition. 46 , 47 Shimizu et al. explained that, for BNT-BT films deposited on a Si substrate with a low thermal expansion coefficient, the tensile strain accumulated during cooling after deposition is released at the Curie temperature, resulting in the formation of a -domain-dominant structures having both the a - and c -axes along the in-plane direction. 29 , 35 In contrast, a compressive strain is generated upon cooling below T C after film deposition, mainly owing to the expansion of the c -axis in the domain structure along the in-plane direction. This strain is relieved by an increase in the domain-wall density under an external force, such as the application of an electric field, and P r , namely, the out-of-plane polarization component, increases, as shown in Figure 3 (d). This “two-step increase” of the P r value appears to begin when an electric field of about 150 kV/cm is applied and to saturate at about 180 kV/cm. These results suggest that the domain walls remained after the removal of the electric field, and applying a small electric field is sufficient to activate wall motion. Therefore, the domains could be reversibly moved under an applied electric field, thereby enhancing the piezoelectric response. In situ XRD was performed under an applied electric field to ascertain the mechanism of the large piezoresponse. Figure 5 compares the XRD patterns of the (1– x )(Bi,Na)TiO 3 – x BaTiO 3 films before (black lines), under (blue lines), and after (red lines) the application of a +150 kV/cm electric field for films with (a) x = 0.06 and (b) x = 0.2. As shown in Figure 5 (a,b), the XRD patterns before and after the application of the electric field are almost the same for both films with x = 0.06 and 0.2, and thus, the crystal structures of the films are the same, with the in-plane polarized a -domain as the main orientation. In Figure 5 (a), only the {200} peak appears in the diffraction pattern, even under the application of an electric field, and it hardly changes compared with the diffraction peaks before and after applying the electric field for the film with x = 0.06. However, additional peaks located at an angle lower than the original position were observed under an electric field for the film with x = 0.2, as shown in Figure 5 (b).
Considering that the original peak position was identified as a {200} peak originating from the in-plane polarized domain, the novel peak can be identified as the {002} peak from the out-of-plane polarized domain. This suggests a change in the polarization direction when an electric field is applied; that is, the a -domain switches to the c -domain by applying an electric field. When the electric field was turned off, the XRD pattern was almost the same as that before the application of the electric field, suggesting that reversible domain switching occurred due to the application of the electric field. The piezoelectric response d 33.f from the domain switching can be estimated by using the peak-fitting method. The estimated volume fraction of the c -domain was approximately 33% under the electric field, as shown in Figure S6 and Table S1 . On the basis of this change in the volume fraction, d 33,f ≈ 152 pm/V was calculated using the following equation 35 where E , V c , c , and a represent the electric field, c -domain volume fraction, c -axis lattice parameter, and a -axis lattice parameter, respectively. The subscript “0” denotes no application of an electric field. The angle of incidence of the X-rays was not 90°, causing the beam to spread over the electrode of interest in an elliptical shape with a long diameter of 400 μm and a short diameter of 100 μm. Therefore, even at the best beam position, the XRD pattern comprised approximately 42% diffraction from outside the electrode, resulting in an underestimation of the d 33,f value. If the electrode diameter completely covered the beam diameter, d 33,f was 260 pm/V, which almost agreed with the results shown in Figure 4 (b,d). This result suggests that domain switching is the origin of the large piezoelectric response of the films with x = 0.2, as shown in Figure 4 , similar to that of films with x = 0.3, as demonstrated in our previous study. 35 This can be explained by the similar tetragonality of the two films, as shown in Figure 2 (b). Finally, the transverse piezoelectric coefficients, e 31,f , which are widely used to characterize the piezoelectric properties of thin films, were measured to compare the piezoelectric properties of films prepared on other substrates and films of other materials with x = 0.2. The e 31,f were calculated from the curvature of the cantilever, owing to the actuation of the piezoelectric film on the beam. 48 , 49 Figure 6 (a) displays the estimated e 31,f versus the measured voltage, with closed and open squares representing the results for the Si and SrTiO 3 substrates for films with x = 0.2, respectively, while closed and open triangles represent the results for the Si and SrTiO 3 substrates for films with x = 0.3. The films on SrTiO 3 show e 31,f values of 4–5 C/m 2 for both compositions, which are comparable to those of other perovskite epitaxial films. 15 In contrast, the films on the Si substrate exhibited a high e 31,f value of 19 C/m 2 for both film compositions. 35 , 50 No clear “two-step increase” was detected in the dependence of e 31,f on an applied electric field. This may be due to the piezoelectric signal at the first step being too small and/or the differences in the measurement frequency and electrode size for the d 33,f and e 31,f measurements. However, as shown in Figures 4 and 6 , both d 33,f and e 31,f show large values, suggesting that domain switching was already completed when the 20 V pulsed wave was applied. 
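Returning to the domain-switching estimate above: the fitting equation itself is not reproduced in the text. A plausible reconstruction, consistent with the stated variable definitions but an assumption rather than the published expression, treats the strain as the field-induced change in the volume-averaged out-of-plane lattice parameter:

```latex
% Assumed reconstruction of the domain-switching estimate (not verified against
% the published equation): strain taken as the change in the volume-averaged
% out-of-plane lattice parameter caused by a -> c switching, divided by the field.
\[
d_{33,\mathrm{f}} \;\approx\; \frac{1}{E}\,
\frac{\bigl[V_{c}\,c + (1-V_{c})\,a\bigr] - \bigl[V_{c0}\,c_{0} + (1-V_{c0})\,a_{0}\bigr]}
     {V_{c0}\,c_{0} + (1-V_{c0})\,a_{0}}
\;\approx\; \frac{(V_{c}-V_{c0})\,(c-a)}{a\,E}
\]
```

The second, simplified form assumes that the lattice parameters themselves are essentially unchanged by the field ( c ≈ c 0 , a ≈ a 0 ) and that the initial c -domain fraction is small. With ΔV c ≈ 0.33, E = 150 kV/cm, and an illustrative tetragonality ( c / a – 1) of roughly 0.7%, consistent with Figure 2 (b), this gives on the order of 150 pm/V, matching the value quoted above.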
Figure 6 (b) presents a comparison of the e 31,f values obtained in this study with those of previous studies on Pb-based perovskite films and Pb-free materials. 15 The e 31,f values of the (1– x )(Bi,Na)TiO 3 – x BaTiO 3 ( x = 0.2 and 0.3 35 ) films are the highest among the Pb-free materials, surpassing those of most Pb-based films, except for the relaxor Pb(Mg,Nb)O 3 –PbTiO 3 and Nb-doped Pb(Zr,Ti)O 3 films. More importantly, an e 31,f value of 19 C/m 2 was obtained over a composition range of at least 10 atom % (for both x = 0.2 and 0.3), which is much wider than a morphotropic phase boundary with its limited composition range of 1–2%. These findings imply that an improved piezoelectric response using domain switching can pave the way for practical applications in various devices, including those based on MEMS technology, owing to the large piezoresponse over a wide composition range.
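For context on why e 31,f , rather than the bulk d 31 , is the figure of merit compared in Figure 6 (b): for a film clamped to a substrate, the effective transverse coefficient is conventionally defined from the bulk coefficients as shown below. This is a standard thin-film relation quoted for orientation only, not a result of this study, and reported values are usually compared as magnitudes.

```latex
% Standard definition of the effective transverse piezoelectric coefficient of a
% clamped thin film (quoted for context; not derived in this work).
\[
e_{31,\mathrm{f}} \;=\; \frac{d_{31}}{s^{E}_{11} + s^{E}_{12}}
\]
```

Here d 31 is the bulk transverse piezoelectric coefficient and s E 11 and s E 12 are elastic compliances at constant electric field; the substrate clamping that suppresses in-plane deformation is what makes this combined quantity, rather than d 31 alone, the relevant measure for MEMS films.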
Results and Discussion Figure 1 (a,b) shows the out-of-plane and in-plane X-ray diffraction (XRD) patterns, respectively, of the 2.0 μm thick (1– x )(Bi,Na)TiO 3 – x BaTiO 3 films with x = 0.06–1.0 on the Si substrate with buffer layers. Wider 2θ and 2θ χ range scan data are presented in Figure S1(a,b) . In the out-of-plane XRD patterns shown in Figures 1 (a) and S1(a) in the Supporting Information, h 00 or 00 l diffraction peaks from tetragonal (Bi,Na)TiO 3 –BaTiO 3 were observed together with h 00 c diffraction peaks from the other underlying perovskite layers with pseudocubic cells, such as LaNiO 3 and (La 0.5 Sr 0.5 )CoO 3 . In addition, the shift of the (200) diffraction peaks to lower angles with increasing x value, without any obvious secondary phase, indicates an increase in the out-of-plane lattice parameter ( a -axis) and the formation of a solid solution, consistent with the bulk references ( 33 , 40 ). The in-plane GIXRD patterns shown in Figures 1 (b) and S1(b) in the Supporting Information show peaks derived from the perovskite structure. In addition, the presence of both {101} and {100} peaks in these in-plane measurements indicates that the film is polycrystalline in the in-plane direction, i.e., the film is uniaxially oriented. Figure S2(a,b) shows surface and cross-sectional SEM images, respectively. As shown in Figure S2(a) , the thin film is composed of grains of almost uniform size. The random shape of the grains corresponds to the in-plane polycrystalline nature of the film. In addition, as shown in Figure S2(b) , the film is dense, and no clear columnar structure was detected. Figure 2 (a,b) illustrates the composition dependence of the out-of-plane and in-plane lattice parameters obtained from the out-of-plane XRD θ–2θ and in-plane GIXRD scans, along with the tetragonality, defined as {(out-of-plane lattice parameter)/(in-plane lattice parameter) – 1} and presented in Figure 2 (b). The squares and diamonds represent the out-of-plane and in-plane lattice parameters, calculated from the {200} peak position of the out-of-plane XRD θ–2θ scan and the {002} peak positions of the in-plane GIXRD pattern, respectively. The previously reported c - and a -axis data for the sintered body, depicted using closed 33 and open 40 circles, and the c / a ratios, depicted using triangles, are plotted in Figure 2 (a,b), respectively. The results shown in Figure 2 (a) reveal that all of the films prepared in this study have smaller c -axis values and larger a -axis values than the bulk lattice parameters. It was also ascertained that this orientation did not change dramatically when the underlying (La,Sr)CoO 3 was replaced with other bottom electrodes, such as SrRuO 3 (not shown here). As shown in Figure 2 (b), the observed tetragonality was considerably smaller than the values reported for ceramics. Relatively higher tetragonality was observed in the approximate composition range of x = 0.2–0.5 among the films prepared in this study, indicated by hatching in Figure 2 , while the tetragonality of the film at x = 0.06 was nearly 0%. Considering the pseudocubic structure reported for (Bi,Na)TiO 3 –BaTiO 3 at x = 0.04–0.07 in bulk ceramics, the results presented herein for the films are almost consistent with the reported results, and thus, the (Bi,Na)TiO 3 –BaTiO 3 films reported herein have a tetragonal structure when x = 0.2–1.0.
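For readers who wish to reproduce the lattice-parameter and tetragonality estimates from peak positions such as those underlying Figures 1 and 2, a minimal sketch follows. It assumes Cu Kα radiation (λ = 0.154 nm, as stated in the Experimental Section); the 2θ values are purely illustrative placeholders and are not taken from the figures.

```python
import numpy as np

WAVELENGTH_NM = 0.154  # Cu K-alpha, as quoted in the Experimental Section

def lattice_parameter_from_2theta(two_theta_deg, h, k, l):
    """Bragg's law: d = lambda / (2 sin(theta)); for an h00 or 00l reflection
    of a tetragonal cell, the axis length is d * sqrt(h^2 + k^2 + l^2)."""
    theta = np.radians(two_theta_deg / 2.0)
    d = WAVELENGTH_NM / (2.0 * np.sin(theta))
    return d * np.sqrt(h**2 + k**2 + l**2)

# Hypothetical peak positions (degrees 2theta), not values from the paper:
a_out_of_plane = lattice_parameter_from_2theta(46.35, 2, 0, 0)  # {200}, theta-2theta scan
c_in_plane     = lattice_parameter_from_2theta(45.95, 0, 0, 2)  # {002}, in-plane GIXRD scan

# Axial ratio of the longer (polar) axis to the shorter axis, minus 1;
# the paper states its tetragonality as (out-of-plane)/(in-plane) - 1.
tetragonality = max(a_out_of_plane, c_in_plane) / min(a_out_of_plane, c_in_plane) - 1.0
print(f"a (out-of-plane) = {a_out_of_plane:.4f} nm, c (in-plane) = {c_in_plane:.4f} nm")
print(f"tetragonality = {100 * tetragonality:.2f} %")
```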
According to our previous reports, tetragonal (Bi,Na)TiO 3 –BaTiO 3 films ( x = 0.06–1.0) deposited on SrTiO 3 substrates are epitaxial and were subjected to detailed XRD analysis. 41 , 42 These results show that the volume fraction of the (100) orientation and the non-180° domain fraction of the (100)/(001)-oriented ferroelectric films are determined by the thermal strain from the substrate; the thermal expansion coefficient of the SrTiO 3 substrate is 10.9 × 10 –6 /K, 43 which is larger than that of (Bi,Na)TiO 3 –BaTiO 3 (approximately 6 × 10 –6 /K). 33 Thus, the (Bi,Na)TiO 3 –BaTiO 3 film on the SrTiO 3 substrate was confirmed to have a pure (001) orientation and a c -domain structure with the polarization axis along the out-of-plane direction, which can be attributed to the in-plane compressive strain experienced during the cooling process after deposition. Conversely, Shimizu et al. reported that (Bi,Na)TiO 3 –BaTiO 3 films deposited on Si substrates have a (100) orientation and an a -domain structure with the polarization axis along the in-plane direction due to in-plane tensile strain, because the Si substrate has a thermal expansion coefficient smaller than that of the films, 35 that is, 3.6 × 10 –6 /K. 44 The in-plane lattice parameters were larger than the out-of-plane parameters for all films prepared in this study. These results suggest that the (Bi,Na)TiO 3 –BaTiO 3 films on the Si substrate in this study are a -domain oriented films for all tetragonal compositions. Figure 3 shows the measured electrical properties. Figure 3 (a,b) illustrates the polarization–electric field ( P – E ) curves measured at 10 kHz with various amplitudes of the triangular wave for (1– x )(Bi,Na)TiO 3 – x BaTiO 3 films at (a) x = 0.06 and (b) x = 0.2, respectively, where the amplitudes were increased sequentially. The measurements were conducted on pristine electrodes that had not previously been subjected to an electric field. For these two compositions, the remanent polarization ( P r ) was well saturated above a high electric field amplitude of 200 kV/cm; however, the manner of saturation was different, particularly at low electric field amplitudes of <150 kV/cm. Small loops were observed at small amplitudes compared with those at large amplitudes for the film with x = 0.2. Figure 3 (c,d) shows the P r value as a function of the amplitude for the films with x = 0.06 and 0.2, respectively. These measurements were performed twice, from low to high electric fields, for each film composition, and the results for the first and second sweeps are indicated using black squares and red circles, respectively. The plots in Figure 3 (c) indicate that the P r value of the film with x = 0.06 shows similar behavior in the first and second cycles, an increase with increasing electric field amplitude and saturation above the coercive electric field, in agreement with the gradual change in the P – E loop in Figure 3 (a). This behavior is typical of ferroelectric materials. Conversely, as shown in Figure 3 (d), the P r value of the film with x = 0.2 tends to saturate against the amplitude twice; the first saturation occurs at a relatively low value below approximately 150 kV/cm, and the second saturation occurs above 200 kV/cm through a rapid increase at 160–200 kV/cm. 30 The value after the second saturation is 1.5–2 times larger than that after the first saturation. No abrupt increase in the P r value was observed during the second cycle.
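To make the sign convention of this thermal-strain argument explicit, the short sketch below evaluates the in-plane mismatch strain accumulated on cooling for the two substrates, using the expansion coefficients quoted above; the cooling interval is an assumed placeholder (roughly from the deposition/Curie temperature to room temperature), not a value from the paper.

```python
# Thermal expansion coefficients quoted in the text (1/K)
ALPHA_FILM   = 6.0e-6    # (Bi,Na)TiO3-BaTiO3 (approximate)
ALPHA_SRTIO3 = 10.9e-6   # SrTiO3 substrate
ALPHA_SI     = 3.6e-6    # Si substrate

def in_plane_thermal_strain(alpha_substrate, alpha_film, delta_T):
    """Strain imposed on a clamped film when the stack cools by delta_T (< 0 for cooling).
    Negative = in-plane compression (favors c-domains); positive = in-plane tension (favors a-domains)."""
    return (alpha_substrate - alpha_film) * delta_T

DELTA_T = -650.0  # assumed cooling interval (K); placeholder, not from the paper

for name, alpha_sub in [("SrTiO3", ALPHA_SRTIO3), ("Si", ALPHA_SI)]:
    eps = in_plane_thermal_strain(alpha_sub, ALPHA_FILM, DELTA_T)
    domain = "c-domain (out-of-plane polarization)" if eps < 0 else "a-domain (in-plane polarization)"
    print(f"{name}: in-plane strain = {eps:+.2e}  ->  expected majority {domain}")
```

With these inputs, cooling on SrTiO3 gives a compressive in-plane strain (c-domain structure), while cooling on Si gives a tensile in-plane strain (a-domain structure), in line with the argument above.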
Consequently, the P r value observed in the second cycle was larger than that in the first cycle, as shown in Figure 3 (d), at low-field amplitudes such as 120 kV/cm. The measured amplitude dependencies of P r for all (1– x )(Bi,Na)TiO 3 – x BaTiO 3 ( x = 0.06–1.0) films in the first and second cycles are also plotted in Figure S3(a,b) . Two-step increments in P r are observed for the films with high tetragonality in Figure 2 (b), that is, those with x = 0.2, 0.3, and 0.5. Figure 4 (a) illustrates the P – E relationships measured at 10 kHz and an electric field amplitude of 250 kV/cm during the second sweep-up cycle for the (1– x )(Bi,Na)TiO 3 – x BaTiO 3 ( x = 0.06–1.0) films. Clear hysteresis loops originating from ferroelectricity were obtained for all films. In addition, Figure 4 (b) shows the unipolar strain–electric field ( S – E ) curves measured at 10 kHz and an amplitude of +150 kV/cm after the poling treatment with an amplitude of +250 kV/cm. The composition dependences of P r and of the piezoelectric properties, that is, the longitudinal effective piezoelectric response d 33,f , at an applied electric field amplitude of 150 kV/cm are presented in Figure 4 (c,d), respectively. In this study, d 33,f was defined as S max / E max , where S max and E max are the maximum strain and electric field, respectively. In Figure 4 (c,d), the circles represent P r and d 33,f at a 150 kV/cm electric field amplitude after poling at +150 kV/cm, while the squares represent data after poling at +250 kV/cm; that is, the circles and squares in Figure 4 (c) correspond to the P r values of the first and second cycles at a 150 kV/cm amplitude, respectively, presented in Figure S3 . In Figure 4 (c), the composition dependence of the P r value shows a continuous decrease as the x value deviates from the MPB composition, x ≈ 0.06, for both the first and second cycles. The P r of the films was smaller than those reported for ceramics and for c -axis oriented films on SrTiO 3 substrates. 39 , 41 This is mainly due to the a -axis orientation of these films and their suppressed tetragonality in comparison with bulk ceramics, as shown in Figure 2 (b). 39 Furthermore, comparing the data for the first and second sweep-up cycles in Figure 4 (c) revealed that the films with x = 0.2, 0.3, and 0.5 exhibit a “two-step increase” in the P r value, as confirmed from the data presented in Figures 3 and S3 . These films exhibit relatively high tetragonality values, as shown in Figure 2 (b). In bulk ceramics, it has been reported that tetragonal and rhombohedral phases coexist at x = 0.05–0.07. 28 In the present study, the remanent polarization value of the film with x = 0.06 is larger than those of the tetragonal films with x = 0.2–1.0, which have the a -domain as the majority orientation, suggesting that the film with x = 0.06 may include a rhombohedral phase, which has a [111] polar axis. The observed axial ratio of almost unity (tetragonality of nearly 0%) also supports the existence of a rhombohedral phase in the film with x = 0.06. In Figure 4 (d), the composition dependence of d 33,f in the first cycle exhibits a trend similar to that of the ceramics, where the values decrease as x departs from the MPB composition, x ≈ 0.06. Conversely, in the second cycle, the films with compositions of x = 0.2–0.5, where a two-step increase in P r was observed in Figure S3(b) , showed high d 33,f values above 220 pm/V.
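Since d 33,f is defined above simply as S max / E max from the unipolar S–E sweep, the corresponding data reduction is only a few lines, as sketched below. The film thickness follows the Experimental Section, while the drive field and displacement amplitude are placeholders chosen only to give a value of the same order as Figure 4(d).

```python
import numpy as np

FILM_THICKNESS_M = 2.0e-6  # 2.0 um film, as described in the Experimental Section

def d33f_from_unipolar_sweep(field_V_per_m, displacement_m, thickness_m=FILM_THICKNESS_M):
    """Effective longitudinal coefficient defined in the text as S_max / E_max,
    with S taken as the out-of-plane displacement divided by the film thickness."""
    strain = np.asarray(displacement_m) / thickness_m
    d33f_m_per_V = strain.max() / np.asarray(field_V_per_m).max()
    return d33f_m_per_V * 1e12  # convert m/V -> pm/V

# Hypothetical unipolar S-E data: 0 -> 150 kV/cm drive, ~7 nm peak displacement (placeholder)
E = np.linspace(0.0, 150e3 * 1e2, 200)   # 150 kV/cm expressed in V/m
u = 7.0e-9 * (E / E.max())               # placeholder linear response
print(f"d33,f ~ {d33f_from_unipolar_sweep(E, u):.0f} pm/V")
```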
These values surpass the results for the film at x = 0.06, the composition closest to the MPB, and those previously reported for ceramics near the MPB composition at x = 0.06 ( d 33,f ≈ 140 pm/V). 45 In short, a large d 33,f above 220 pm/V was obtained over a wide composition range of 30 atom % for tetragonal BNT-BT films far from the MPB composition, and it exceeds the previously reported value for bulk ceramics with the MPB composition. Here, Figure S4(a–e) shows the atomic force microscopy (AFM) images of the films with x = 0.06–1.0. Randomly arranged and uniformly shaped grains were observed for all films. These results may correspond to the in-plane polycrystalline character of the films. The average roughness ( R a ) plotted against composition is shown in Figure S5 . As shown in Figures 4 (d) and S5 , there is no strong correlation between the composition dependences of R a and d 33,f . Owing to the large thickness of the films, the misfit strain introduced by the difference in lattice parameters between the film and the substrate is considered to be almost fully relaxed during film deposition. 46 , 47 Shimizu et al. explained that the release of the tensile strain accumulated at the Curie temperature during cooling after deposition results in the formation of BNT-BT films with a -domain-dominant structures, having both the a - and c -axes along the in-plane direction, when deposited on a Si substrate with a low thermal expansion coefficient. 29 , 35 In contrast, compressive strain is generated in the c -axis domains upon cooling below T C after film deposition, mainly owing to their expansion along the in-plane direction. This strain is relieved by increased domain-wall motion under an external stimulus, such as an applied electric field, and P r , namely, the out-of-plane polarization component, increases as shown in Figure 3 (d). This “two-step increase” of the P r value appears to begin upon applying an electric field of about 150 kV/cm and to saturate at about 180 kV/cm. These results suggest that the domain walls remained after the removal of the electric field and that applying a small electric field is sufficient to activate wall motion. Therefore, the domains can be reversibly moved under an applied electric field, thereby enhancing the piezoelectric response. In situ XRD was performed under an applied electric field to ascertain the mechanism of the large piezoresponse. Figure 5 compares the XRD patterns of the (1– x )(Bi,Na)TiO 3 – x BaTiO 3 films before (black lines), under (blue lines), and after (red lines) the application of a +150 kV/cm electric field for films with (a) x = 0.06 and (b) x = 0.2. As shown in Figure 5 (a,b), the XRD patterns before and after the application of the electric field are almost the same for both films with x = 0.06 and 0.2, and thus, the crystal structure of the films is unchanged, with the in-plane polarized a -domain as the main orientation. In Figure 5 (a), only the {200} peak appears in the diffraction pattern, even under the application of an electric field, and it hardly changes compared with the diffraction peaks before and after applying the electric field for the film with x = 0.06. However, additional peaks located at an angle lower than the original position were observed under an electric field for the film with x = 0.2, as shown in Figure 5 (b).
Considering that the original peak position was identified as a {200} peak originating from the in-plane polarized domain, the new peak can be identified as the {002} peak from the out-of-plane polarized domain. This suggests a change in the polarization direction when an electric field is applied; that is, the a -domain switches to the c -domain under an applied electric field. When the electric field was turned off, the XRD pattern was almost the same as that before the application of the electric field, suggesting that reversible domain switching occurred due to the application of the electric field. The piezoelectric response d 33,f arising from the domain switching can be estimated by using the peak-fitting method. The estimated volume fraction of the c -domain was approximately 33% under the electric field, as shown in Figure S6 and Table S1 . On the basis of this change in the volume fraction, d 33,f ≈ 152 pm/V was calculated using the domain-switching strain relation of ref 35 (sketched in approximate form below), where E , V c , c , and a represent the electric field, c -domain volume fraction, c -axis lattice parameter, and a -axis lattice parameter, respectively. The subscript “0” denotes no application of an electric field. The angle of incidence of the X-rays was not 90°, causing the beam to spread over the electrode of interest in an elliptical footprint with a long diameter of 400 μm and a short diameter of 100 μm. Therefore, even at the best beam position, the XRD pattern comprised approximately 42% diffraction from outside the electrode, resulting in an underestimation of the d 33,f value. If the electrode had completely covered the beam footprint, d 33,f would be 260 pm/V, which almost agrees with the results shown in Figure 4 (b,d). This result suggests that domain switching is the origin of the large piezoelectric response of the film with x = 0.2, as shown in Figure 4 , similar to that of the film with x = 0.3 demonstrated in our previous study. 35 This can be explained by the similar tetragonality of the two films, as shown in Figure 2 (b). Finally, the transverse piezoelectric coefficient, e 31,f , which is widely used to characterize the piezoelectric properties of thin films, was measured for the film with x = 0.2 to allow comparison with films prepared on other substrates and with films of other materials. The e 31,f values were calculated from the curvature of the cantilever induced by the actuation of the piezoelectric film on the beam. 48 , 49 Figure 6 (a) displays the estimated e 31,f versus the measured voltage, with closed and open squares representing the results for the Si and SrTiO 3 substrates for films with x = 0.2, respectively, while closed and open triangles represent the results for the Si and SrTiO 3 substrates for films with x = 0.3. The films on SrTiO 3 show e 31,f values of 4–5 C/m 2 for both compositions, which are comparable to those of other perovskite epitaxial films. 15 In contrast, the films on the Si substrate exhibited a high e 31,f value of 19 C/m 2 for both film compositions. 35 , 50 No clear “two-step increase” was detected in the dependence of e 31,f on the applied electric field. This may be because the piezoelectric signal at the first step is too small and/or because of the differences in the measurement frequency and electrode size between the d 33,f and e 31,f measurements. However, as shown in Figures 4 and 6 , both d 33,f and e 31,f show large values, suggesting that domain switching was already completed when the 20 V pulsed wave was applied.
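The equation referred to above does not survive in this copy of the text. From the variable definitions given (E, V c , c, a, with the subscript 0 denoting zero field), it is most plausibly a lattice-strain estimate of the form sketched below; this is a reconstruction, and the exact expression should be taken from ref 35.

```latex
% Average out-of-plane lattice spacing of a mixed a/c-domain film:
%   \bar{d}(E) = V_c(E)\, c + [1 - V_c(E)]\, a
% The field-induced strain divided by the field gives the effective coefficient:
d_{33,\mathrm{f}} \;\approx\; \frac{1}{E}\,
  \frac{\bigl[V_{c}\,c + (1 - V_{c})\,a\bigr] \;-\; \bigl[V_{c0}\,c_{0} + (1 - V_{c0})\,a_{0}\bigr]}
       {V_{c0}\,c_{0} + (1 - V_{c0})\,a_{0}}
```

Read this way, the beam-coverage argument is a simple linear rescaling: if only about 58% of the diffracting volume lies under the biased electrode, the apparent c-domain fraction, and hence d 33,f , is diluted by roughly that factor, and 152 pm/V / 0.58 ≈ 260 pm/V, consistent with the corrected value quoted in the text.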
Figure 6 (b) presents a comparison of the e 31,f values obtained in this study with those of previous studies on Pb-based perovskite films and Pb-free materials. 15 The e 31,f values of the (1– x )(Bi,Na)TiO 3 – x BaTiO 3 ( x = 0.2 and 0.3 35 ) films are the highest among the Pb-free materials, surpassing those of most Pb-based films, except for the relaxor Pb(Mg,Nb)O 3 –PbTiO 3 and Nb-doped Pb(Zr,Ti)O 3 films. More importantly, an e 31,f value of 19 C/m 2 was obtained over a composition range of at least 10 atom % (for both x = 0.2 and 0.3), which is much wider than a morphotropic phase boundary with its limited composition range of 1–2%. These findings imply that an improved piezoelectric response based on domain switching can pave the way for practical applications in various devices, including those based on MEMS technology, owing to the large piezoresponse over a wide composition range.
Conclusions Tetragonal (1– x )(Bi,Na)TiO 3 – x BaTiO 3 films were deposited on Si substrates over a wide composition range ( x = 0.06, 0.2, 0.3, 0.5, and 1.0), and the polarization axis was principally aligned along the in-plane direction owing to the tensile thermal strain from the substrate. XRD measurements revealed a composition dependence of the tetragonality similar to that of bulk ceramics, with a maximum at approximately x = 0.2–0.5; however, its absolute value was smaller than that reported for bulk ceramics. For the high-tetragonality composition region, we observed a “two-step increase” in remanent polarization due to domain rearrangement under a high-field amplitude and an exceptional piezoelectric response ( d 33,f > ∼200 pm/V) over the 30 atom % composition range of x = 0.2–0.5, surpassing values reported for bulk ceramics. In situ XRD analysis confirmed domain switching from in-plane to out-of-plane polarization for x = 0.2. An e 31,f of 19 C/m 2 was observed using cantilever structures for films in the 10 atom % composition range of x = 0.2–0.3; this e 31,f is among the highest values reported for Pb-free materials and is comparable to that of Pb-based ones. These results demonstrate good piezoelectric properties over a compositional range several times broader than the limited MPB range of 1–2%. The innovative concept of reversible domain switching facilitates improved piezoelectric properties over an extended composition range, in a departure from conventional MPB compositions. We believe that the achievement of high environmental sustainability and composition insensitivity in lead-free piezoelectric materials will inspire further exploration of piezoelectric materials, a field that has been dominated by Pb-based materials near the MPB composition for the last 70 years.
Tetragonal (1– x )(Bi,Na)TiO 3 – x BaTiO 3 films exhibit enhanced piezoelectric properties due to domain switching over a wide composition range. These properties were observed over a significantly wider composition range than the morphotropic phase boundary (MPB), which typically has a limited composition range of 1–2%. The polarization axis was found to be along the in-plane direction for the tetragonal composition range x = 0.06–1.0, attributed to the tensile thermal strain from the substrate during cooling after the film formation. A “two-step increase” in remanent polarization against an applied maximum electric field was observed at the high-field region due to the domain switching, and a very high piezoelectric response (effective d 33 value, denoted as d 33,f ) over 220 pm/V was achieved for a wide composition range of x = 0.2–0.5 with high tetragonality, exceeding previously reported values for bulk ceramics. Moreover, a transverse piezoelectric coefficient, e 31,f , of 19 C/m 2 measured using a cantilever structure was obtained for a composition range of at least 10 atom % (for both x = 0.2 and 0.3). This value is the highest reported for Pb-free piezoelectric thin films and is comparable to the best data for Pb-based thin films. Reversible domain switching eliminates the need for conventional MPB compositions, allowing an improvement in the piezoelectric properties over a wider composition range. This strategy could provide a guideline for the development of environmentally acceptable lead-free piezoelectric films with composition-insensitive piezoelectric performance to replace Pb-based materials with MPB composition, such as PZT.
Experimental Section Film Preparation Approximately 2.0 μm thick (1– x )(Bi,Na)TiO 3 – x BaTiO 3 films with x = 0.06–1.0 were deposited by pulsed laser deposition (PLD) at 675 °C for about 2 h under an O 2 pressure of 200 mTorr using a KrF excimer laser (λ = 248 nm, pulse energy of 170 mJ). The targets used for the deposition were prepared via a solid-state reaction of Bi 2 O 3 , Na 2 CO 3 , BaCO 3 , and TiO 2 powders, with an excess of 20 mol % bismuth oxide and sodium carbonate to compensate for the high volatility of Bi and Na, similar to the process used for sintered ceramics and other film-deposition processes. (Bi,Na)TiO 3 –BaTiO 3 films were deposited on (100)-oriented Si single-crystal substrates covered with a Pt electrode, Pt/TiO 2 /SiO x /(100)Si. To deposit {100}-out-of-plane-oriented textured films, a LaNiO 3 buffer layer, which can achieve {100}-preferred-oriented textured films independent of the kind of substrate, 51 , 52 was inserted between the (La 0.5 Sr 0.5 )CoO 3 electrode layer and the (111)Pt/TiO 2 /SiO x /Si substrates. The LaNiO 3 films were prepared by RF sputtering at 350 °C and subsequent heat treatment at 800 °C, showing the (100) c orientation (the subscript c indicates pseudocubic cells). The (La 0.5 Sr 0.5 )CoO 3 films were prepared by PLD to ensure sufficient conductivity of the electrode. XRD Analysis The crystal structures of the prepared films were analyzed using X-ray diffraction (XRD; X’Pert-MRD, Philips, and SmartLab, Rigaku, λ = 0.154 nm). The ω–2θ scans were carried out to estimate the lattice parameters by performing 2θ scans while changing the incident angle (ω). The 2θ position of Si (lattice parameter: 5.43 Å) was used as a reference (Coll. Code 51688). The film thickness was estimated using wavelength-dispersive X-ray fluorescence (WD-XRF; Axios PW4400/40, PANalytical), and the results were compared to those of a reference sample. The crystal structures of the films under an applied electric field were investigated using a microfocus X-ray diffraction (XRD) setup with a 2D detector (Bruker AXS D8 DISCOVER) by focusing the X-rays on the Pt top electrodes. The X-rays were focused onto a Pt top electrode with φ = 200 μm, to which an electric field of 250 kV/cm amplitude was applied, and the diffraction patterns were collected by a two-dimensional detector. A collimator with a pinhole of 100 μm diameter was used. Microstructure Analysis The surface morphology and cross-sectional microstructure were observed by using a field emission scanning electron microscope (FESEM; Hitachi, S-4800) and an atomic force microscope (AFM; SPA400, SII). Electrical Characterization Pt top electrodes with φ = 200 μm were deposited on the (Bi,Na)TiO 3 –BaTiO 3 films via evaporation to measure the electrical and piezoelectric properties. The ferroelectricity at room temperature of the Pt/(Bi,Na)TiO 3 –BaTiO 3 /(La 0.5 Sr 0.5 )CoO 3 capacitor was measured by using a ferroelectric tester (TOYO, FCE-1A) at 10 kHz. The electric-field-induced strain was recorded using a laser Doppler vibrometer (LDV; Polytec, NLV-2500-5) simultaneously with the P – E measurements. e 31,f was determined from the tip displacement of the cantilever using the LDV. The sample length and thickness of the cantilevers are 11.5 mm and 780 μm for the Si substrate and 11.9 mm and 500 μm for the SrTiO 3 substrate, respectively. The tip displacement was produced by applying a sinusoidal voltage of various amplitudes with a bias of −10 V, after the film had been poled with a 20 V pulse wave.
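The conversion from the LDV tip displacement to e 31,f is not spelled out here; it follows refs 48 and 49. As a rough, hedged sketch only, a Stoney-type unimorph approximation relates the two quantities as shown below. The substrate elastic constants and the measured displacement are illustrative assumptions, and the exact prefactor used in the paper may differ.

```python
def e31f_from_tip_displacement(delta_m, drive_V, length_m, h_sub_m, E_sub_Pa, nu_sub):
    """Stoney-type estimate for a thin piezoelectric film on a much thicker elastic
    cantilever: delta = 3 * e31,f * V * L^2 / (M_s * h_s^2), with the biaxial
    substrate modulus M_s = E_s / (1 - nu_s). Returns |e31,f| in C/m^2.
    The exact prefactor used in the paper follows refs 48 and 49 and may differ."""
    M_s = E_sub_Pa / (1.0 - nu_sub)
    return abs(delta_m) * M_s * h_sub_m**2 / (3.0 * length_m**2 * drive_V)

# Cantilever geometry quoted above (Si substrate):
L, h_s = 11.5e-3, 780e-6
# Assumed Si elastic constants (illustrative values only):
E_si, nu_si = 170e9, 0.28
# Placeholder tip displacement at a 20 V drive (not a measured value):
delta = 1.0e-6

print(f"|e31,f| ~ {e31f_from_tip_displacement(delta, 20.0, L, h_s, E_si, nu_si):.1f} C/m^2")
```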
Data Availability Statement The data supporting the findings of this study are available from the corresponding author upon reasonable request. Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsami.3c13302 . Out-of-plane and in-plane XRD patterns, SEM images, surface morphology, and remanent polarization as a function of measured maximum amplitude ( PDF ) Supplementary Material Author Contributions The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript. The authors declare no competing financial interest. Acknowledgments This work was partly supported by the Element Strategy Initiative to Form a Core Research Center of the Ministry of Education, Culture, Sports, Science, Technology of Japan (MEXT) Grant Number JPMXP0112101001, MEXT Program: Data Creation and Utilization Type Material Research and Development Grant No. JPMXP1122683430, JSPS KAKENHI Grant Numbers 23KJ0903 (KI) and 19K15288 (TS), and MEXT KAKENHI Grant Number 20H05185 (TS).
CC BY
no
2024-01-16 23:43:51
ACS Appl Mater Interfaces. 2023 Dec 28; 16(1):1308-1316
oa_package/74/80/PMC10788825.tar.gz
PMC10788826
38147588
Introduction Metal–organic frameworks (MOFs) are well-known for their nanoporous structure and high internal surface area, which have led to their application in gas storage and separation. 1 − 6 Since most MOFs are not electrically conductive, intensive research has been conducted to develop conductive MOFs by introducing free charge carriers and creating low-energy transfer pathways in MOF structures. 7 , 8 However, such strategies limit the variety of MOF components and structures. Hybridizing MOFs with conductive carbon materials, such as single-walled carbon nanotubes (SWCNTs), multiwalled carbon nanotubes (MWCNTs), graphene, and reduced graphene oxide (rGO), offers a practical route to enable electrical applications for many traditional MOFs. This approach has facilitated the use of multiple nonconductive MOFs in electrical sensing, electrochemistry, and electrocatalysis. 9 − 14 It is worth noting that, among the reported SWCNT/MOF hybrid materials, most research has centered on their application in electrochemistry. 15 − 19 The utilization of SWCNT/MOF hybrid materials in electrical sensor applications remains underexplored, despite the promising characteristics of SWCNTs. SWCNTs offer a compelling combination of attributes essential for sensor development, including a large surface-to-volume ratio, long-range conductivity, and inherent semiconductive properties. 20 − 22 To leverage these advantages, SWCNT-based sensors typically incorporate selective chemical layers on the nanotube surfaces to enhance their specificity. These layers mostly include components such as metal nanoparticles, polymers, noncovalently attached small molecules, and covalent functionalizations. 23 Utilizing MOFs as the selective layer is a novel and rational approach. The diverse functionalities of MOF ligands offer significant potential for chemical specificity, and the porosity of MOFs enables access to the surface of the nanotubes, which is the most sensitive part of SWCNTs. Previous work from our group has shown improved chemiresistive gas sensing of ethanol, dimethyl methylphosphonate (DMMP), and H 2 with SWCNT@ZIF-8, 24 SWCNT@UiO-66-NH 2 , 25 and SWCNT@Pd@HKUST-1 9 composites, respectively. Furthermore, our recent report showcased the application of SWCNT@MOF composites in liquid-gated FET devices for the first time. A series of carbohydrates with different molecular sizes was discriminated using FET devices made up of SWCNT@Cu 3 (HHTP) 2 . 26 The sensing mechanism of the SWCNT@MOF FET devices is unique. Unlike conventional SWCNT FET sensors that rely on the electrostatic effect of the analyte to alter the CNT conductance, SWCNT@MOF FET sensing operates by controlling the gate capacitance through the inhibition of ion transport to the SWCNT surfaces, accomplished by analyte molecules obstructing the MOF channels. This sensing mechanism was validated using a variety of carbohydrates of varying sizes that occupied the pores of the Cu 3 (HHTP) 2 MOF. As the pores of Cu 3 (HHTP) 2 were most tightly packed by the small glucose molecules, the largest current decrease was observed in the SWCNT@Cu 3 (HHTP) 2 FET devices. 26 In this work, we aim to explore the detection of a single analyte using various SWCNT@MOF FET devices, each with a distinct pore size. For this purpose, the UiO-6x MOF series is particularly suitable. The UiO MOF series exhibits a uniform structure, in which each zirconium oxide cluster is ideally interconnected by 12 dicarboxylate ligands. 27
By incorporating benzene rings into the ligand, the UiO MOFs can be tailored to feature varied pore sizes while maintaining a consistent chemical environment within the channels. Additionally, the benzene ring offers numerous opportunities for functionalization, wherein functional groups can be introduced to slightly adjust both the pore size and the chemical environment within the channels. 28 , 29 Norfentanyl (NF) is the major metabolite of fentanyl, a potent, fast-acting synthetic opioid. 30 Detection of NF has attracted considerable research interest because illicit fentanyl use has caused a surge in overdose deaths in the United States. 31 Research has shown that NF has a longer detection window after fentanyl administration and is present at higher concentrations than fentanyl in urine samples, which makes NF a suitable marker to track fentanyl exposure. 32 , 33 Detection of NF primarily relies on mass spectrometry-based techniques, 34 − 36 which demand expensive instrumentation and specialized training. Traditional electrochemistry methods have not proven successful due to the redox-inactive nature of NF. 37 Nevertheless, there have been a few successful demonstrations of electrical NF sensors achieved through functionalization with NF biorecognition elements. Kumar et al. demonstrated the detection of three different opioid metabolites, including NF, using aptamer-functionalized graphene FET sensors (G-FET) with a 2-digit pg/mL limit of detection. 38 Shao et al. achieved detection at the fg/mL level with a semiconducting-SWCNT-based FET biosensor (sc-SWCNT-FET) functionalized with an NF antibody. 39 However, it is important to note that these biorecognition elements require specific storage conditions, limiting their application in out-of-the-box and on-site scenarios. On the other hand, UiO-MOFs exhibit outstanding mechanical, thermal, and chemical stability. 27 , 40 , 41 In tests of long-term stability, UiO-67 remained stable for up to 2 months when stored in water, 42 while UiO-66 demonstrated remarkable prolonged stability, with no degradation observed over a period of 12 months. 43 Reports also indicated that UiO-67 remained undegraded when kept dry, 44 and UiO-66 exhibited no degradation even after exposure to high humidity for 28 days. 45 This exceptional stability of UiO-MOFs is particularly advantageous for sensors intended for use in nonlaboratory environments. With pore sizes close to the size of the NF molecule, UiO-MOFs are promising FET sensor materials for detecting NF. Herein, we studied the interaction between NF and SWCNT@UiO-MOF FET devices and report a novel size-based detection method for NF that does not require functionalization with biorecognition elements.
Results and Discussion Four different UiO-MOFs, namely, UiO-66, UiO-66-NH 2 , UiO-67, and UiO-67-CH 3 , were hybridized with SWCNTs. The composites were prepared by growing the MOF on the surface of the SWCNTs. As shown in Scheme 1 , zirconium oxide clusters and ligands tend to adsorb around the carboxylic functionalities on the SWCNTs, promoting heterogeneous MOF growth. 25 As indicated by the TEM images in Figure 1 a–d, the synthesized composites shared a similar “beads-on-a-string” morphology. The composites were deposited onto prefabricated interdigitated electrodes ( Figure S1 ) via dielectrophoresis (DEP) to fabricate FET sensors. Scanning electron microscopy (SEM) imaging ( Figure 1 e–h) showed that the composites formed networks bridging the electrodes. In Figure 1 i–l, the powder X-ray diffraction (XRD) patterns of the composites matched the simulated MOF patterns, confirming the MOF crystal structure in the composites. Comparison with the pure MOFs ( Figure S2 ) revealed that the morphology and XRD patterns of the SWCNT@MOF composites remained largely unchanged, suggesting that the fundamental chemistry of the MOF component was preserved during the one-pot synthesis with SWCNTs. N 2 sorption isotherms at 77 K were collected for the SWCNT@UiO-MOF samples ( Figure S3 a–d). From these data, we calculated Brunauer–Emmett–Teller (BET) surface area (SA) values of 1532 m 2 /g for SWCNT@UiO-66, 1330 m 2 /g for SWCNT@UiO-66-NH 2 , 2603 m 2 /g for SWCNT@UiO-67, and 2284 m 2 /g for SWCNT@UiO-67-CH 3 . Compared to SWCNT@UiO-66, SWCNT@UiO-66-NH 2 possesses a smaller BET SA, likely due to the addition of the –NH 2 functional group. Similarly, a lower BET SA is also observed for SWCNT@UiO-67-CH 3 compared to SWCNT@UiO-67 because of the presence of –CH 3 . The calculated BET SAs for the composite materials are uniformly higher than those of the MOFs alone ( Figure S2 ) and are significantly higher than what would be expected for the pure, defect-free MOFs. 46 We note, however, that it is not uncommon for UiO MOFs to exhibit defects 46 and that the MOF and composite syntheses reported herein likely result in some level of defects. Similar BET SA increases were also observed in other MOFs after hybridization with SWCNTs. 17 , 26 , 47 , 48 The precise mechanism behind these BET surface area increases is a complex matter beyond the scope of the present work. Pore sizes within the UiO-MOF structures were calculated using the Zeo++ software ( Table 1 ). The methodology employed Voronoi tessellation, a technique implemented in Zeo++, to delineate and characterize the void spaces within the MOF framework. 49 Voronoi tessellation partitioned the space surrounding each atom into Voronoi cells, and these cells were analyzed to identify the interconnected pore channels. Pore sizes were subsequently calculated by measuring the diameters or radii of these channels. In particular, the largest pore diameter (or the largest included sphere) corresponds to the greatest distance assigned to the Voronoi nodes. The procedure involves scanning all Voronoi nodes within a periodic unit cell of the structure and identifying the node with the largest separation from its neighboring atom. Examination of the Voronoi network also yields insights into the dimensions of the largest spherical probe capable of freely traversing the void space. This analysis entails an exploration of the connectivity among Voronoi nodes.
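Because the BET analysis above is only summarized, a minimal sketch of the underlying calculation may be useful: the linearized BET equation is fit over the low relative-pressure region and converted to a surface area via the N 2 cross-sectional area. The isotherm points below are placeholders of realistic magnitude, not data from Figure S3.

```python
import numpy as np

N_A = 6.022e23          # 1/mol
SIGMA_N2 = 0.162e-18    # m^2, cross-sectional area of an adsorbed N2 molecule
V_STP = 22414.0         # cm^3(STP)/mol

def bet_surface_area(p_rel, v_ads_cm3g):
    """Linearized BET fit over the supplied relative-pressure points:
    (p/p0) / [v (1 - p/p0)] = 1/(v_m c) + (c - 1)/(v_m c) * (p/p0)."""
    p_rel = np.asarray(p_rel)
    v = np.asarray(v_ads_cm3g)
    y = p_rel / (v * (1.0 - p_rel))
    slope, intercept = np.polyfit(p_rel, y, 1)
    v_m = 1.0 / (slope + intercept)          # monolayer capacity, cm^3(STP)/g
    return v_m / V_STP * N_A * SIGMA_N2      # m^2/g

# Hypothetical isotherm points in the usual BET range (p/p0 = 0.05-0.30):
p_over_p0 = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
v_ads = [310, 357, 390, 421, 453, 489]       # cm^3(STP)/g, placeholder values
print(f"BET surface area ~ {bet_surface_area(p_over_p0, v_ads):.0f} m^2/g")
```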
To illustrate, the determination of the diameter of the largest spherical probe capable of transiting between two nodes necessitates the identification of a path within the Voronoi network that traverses nodes and edges distinguished by maximal distances from adjacent atoms. This path encompasses the regions of the void space with the widest apertures. Liquid-gated FET measurements were performed on the composite and SWCNT devices with phosphate-buffered saline (PBS) as the gating liquid to ensure identical ionic strength. The FET transfer characteristics, i.e., source-drain current versus applied gate voltage (I–V g ), of all four composites and the SWCNT devices are shown in Figure 2 a. The curves of all four composites shifted toward negative voltage, suggesting a similar doping effect of the UiO MOFs. In Figure 2 b, the transfer characteristics are plotted on a logarithmic scale. Compared to the SWCNT devices, all composites showed a lower off-current, which can be ascribed to the coverage of the SWCNT surfaces by MOF crystals. This phenomenon can be elucidated by understanding the DEP process, which exerts forces on SWCNT strands rather than on MOF particles. When the same DEP conditions were applied to the composites, less material was attracted onto the electrode compared to bare SWCNTs. This resulted in a reduced availability of SWCNTs for network formation, consequently making material deposition less effective for the composites. It was noticeable that the on–off ratio of the SWCNT@UiO-66 composites was slightly higher than that of bare SWCNTs. This phenomenon was also observed in our previous studies with SWCNT@MOF composites, where MOF growth on carbon nanotubes suppresses the metallic character of the deposited nanotubes by decreasing the number of metallic junctions in the nanotube network. 24 − 26 Figure 2 c,d shows calibration plots depicting the responses of the SWCNT devices and SWCNT@UiO-MOF composite devices when exposed to NF in PBS. The relative response was calculated by normalizing the decrease in the source-drain current at a −0.5 V gate voltage ( Figure S4 a). Compared to bare SWCNT devices, SWCNT@UiO-67 devices exhibited enhanced responses, while SWCNT@UiO-66 devices demonstrated decreased responses ( Figure 2 c). The varied responses of the SWCNT@UiO-66 and SWCNT@UiO-67 devices were attributed to the pore size difference between UiO-66 and UiO-67. NF molecules were estimated to be 9.6 Å in size ( Figure S5 ). As shown in Table 1 , the pore size of UiO-66 is 8.6 Å, which is too small for the NF molecule to enter ( Figure 3 a). However, in the case of UiO-67, NF molecules can enter the MOF pores, as depicted in Figure 3 b,d. Figure 3 d presents a snapshot from a molecular simulation depicting NF molecules residing in the center of the UiO-67 pores. For a dynamic view of NF molecules moving within the pores, a video is available in the Supporting Information . In a liquid-gated FET, a SWCNT relies on the gate-voltage-driven diffusion of ions to modulate its conductance. When NF molecules reside inside the channels of UiO-67, ion diffusion is partially obstructed, leading to a decrease in the gate capacitance of the FET device. Consequently, there was a significant current reduction in the p-branch of the I – V g curve, as depicted in Figure 4 c. For bare SWCNT devices, the decrease in current resulted from the nonpreferential adsorption of NF on the SWCNT surfaces. These adsorbed molecules can also hinder ion diffusion, causing a decrease in the current.
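The relative-response definition used for the calibration plots (the normalized decrease in source-drain current at a gate voltage of −0.5 V) can be written compactly as follows; the transfer curves below are hypothetical placeholders, not measured data.

```python
import numpy as np

def relative_response(vg, i_baseline, i_analyte, vg_eval=-0.5):
    """Relative response as described in the text: the decrease in source-drain
    current at a fixed gate voltage (here -0.5 V), normalized to the baseline
    current measured in plain PBS. Returns a percentage."""
    order = np.argsort(vg)                      # np.interp needs ascending x
    i0 = np.interp(vg_eval, np.asarray(vg)[order], np.asarray(i_baseline)[order])
    i1 = np.interp(vg_eval, np.asarray(vg)[order], np.asarray(i_analyte)[order])
    return 100.0 * (i0 - i1) / i0

# Placeholder transfer curves (gate swept 0.6 V -> -0.6 V, currents in microamps):
vg = np.linspace(0.6, -0.6, 13)
i_pbs = np.array([0.1, 0.1, 0.2, 0.4, 0.8, 1.5, 2.5, 3.8, 5.2, 6.5, 7.6, 8.4, 9.0])
i_nf = 0.92 * i_pbs                             # hypothetical 8% suppression after NF incubation
print(f"relative response at Vg = -0.5 V: {relative_response(vg, i_pbs, i_nf):.1f} %")
```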
However, this effect was less prominent compared with the blockage inside MOF channels. In the case of the SWCNT@UiO-66 devices, NF molecules cannot enter UiO-66. Only the exposed SWCNTs in the SWCNT@UiO-66 devices were subjected to NF molecules. As a result, the influence of NF on SWCNT@UiO-66 devices was less pronounced compared to bare SWCNT devices ( Figure 4 b). While it is true that adsorbed NF molecules may induce changes in SWCNT conductance through an electrostatic effect, this effect was universal throughout all devices. Therefore, this factor was not discussed when comparing the differences in responses. UiO-66 and UiO-67 can be functionalized on the ligand. With one substitution on the benzene ring, the pore size can be slightly decreased ( Table 1 ). UiO-66-NH 2 and UiO-67-CH 3 have an amine and a methyl group on their ligands, respectively. Their composites with the SWCNT were also tested with NF ( Figure 2 d). SWCNT@UiO-66-NH 2 composite devices showed slightly lower responses compared to those of SWCNT@UiO-66 devices. SWCNT@UiO-66 devices have already exhibited reduced responses compared to bare SWCNT devices, primarily due to the prevention of interaction between NF and SWCNT. Further decreasing the MOF pore size should not significantly alter this outcome. The variation in responses was attributed to the slight difference in the MOF coverage between SWCNT@UiO-66 and SWCNT@UiO-66-NH 2 composites. SWCNT@UiO-67-CH 3 composite devices showed increased responses compared to SWCNT devices, but the enhancement of responses was smaller than that of the SWCNT@UiO-67 devices. We propose that the decreased pore size in UiO-67-CH 3 increases the steric hindrance felt by NF molecules, which slows their diffusion into the MOF ( Figure 3 c), leaving more channels open for ions to reach the SWCNTs after the devices were incubated for the same time. The stability of the SWCNT@UiO-67 composites was assessed by storing and aging them in ethanol. XRD and SEM images were obtained at two time points: 65 and 226 days. For the material aged for 65 days, no significant decrease in crystallinity was observed in the XRD pattern ( Figure S6 ), and SEM images showed no noticeable changes in morphology ( Figure S7 a). However, in the 226 day sample, a substantial reduction in the XRD peak intensity was evident ( Figure S6 ), and SEM images revealed visible degradation, including surface holes on UiO-67 particles ( Figure S7 b), indicative of structural damage. FET devices were fabricated with 226 day aged composites, and their responses were tested against NF ( Figure S8 ). Despite significant MOF structure degradation, the FET devices still exhibited enhanced responses toward NF. This continued response can be attributed to the presence of remaining crystallized MOF channels within the composites, which NF can still occupy and obstruct ion diffusion. The reduced enhancement was attributed to reduced availability of UiO-67 channels in the degraded composites. Signal saturation was observed around 100 ppb to 1 ppm region, possibly due to the limited amount of MOF pores after degradation. An additional linear region appeared in the ppm range, but due to the increased complexity of the MOF structure after degradation, it was challenging to rationalize this observation within the scope of this study. The stability of the fabricated devices was assessed by comparing the sensor response before and after storage in a drawer under ambient conditions for 36 days. 
As shown in Figure S8 , there is a negligible difference between the devices tested immediately after fabrication and those tested after storage, indicating the good stability of the sensor devices after fabrication. Metabolites of other controlled substances were also tested to investigate the sensing specificity of the composites. Normorphine (NM), norhydrocodone (NH), and benzoylecgonine (BZ) are major metabolites of morphine, hydrocodone, and cocaine, respectively. These compounds were dissolved in PBS and added to the SWCNT@UiO-67 devices at concentrations ranging from 1 ppb to 1 ppm ( Figure S9 ). The sensor devices showed no response to NH and BZ but showed responses toward NM similar to those toward NF because its size is similar to that of NF. The response to another metabolite of a size similar to that of NF supports the conclusion that the sensing mechanism is based on molecular size. When compared with other sensing methods, the limit of detection (LOD) from direct measurement using SWCNT@MOF FET sensors is not the lowest ( Table S1 ). However, SWCNT@MOF FET sensors offer a versatile platform for detecting analytes that are challenging for conventional chemical sensing methods by leveraging size-based detection. By incorporating such sensors into arrays and applying machine learning methods, discernment between chemically similar analytes can be achieved. 50 − 52 Prior work has shown that using such methods can lead to improvements in sensitivity and selectivity, 53 − 55 even for materials that were once considered nonspecific. 56 Furthermore, discrimination of analytes in complex chemical environments has been demonstrated. 57 − 59 Our exploration of these SWCNT@MOF FET materials has only just begun, leaving many aspects to investigate, such as the chemical interaction between functional groups inside the MOF channels and analyte molecules, sensor fabrication methods, and the optimization of pore size for specific analytes.
Conclusions The heterogeneous growth of four different UiO-MOFs, namely, UiO-66, UiO-66-NH 2 , UiO-67, and UiO-67-CH 3 , on SWCNTs was demonstrated. The resulting SWCNT@UiO-MOF materials effectively combined porosity and electrical conductivity. These materials were subsequently fabricated into liquid-gated FET devices, marking the first instance of NF detection without the need for sensor functionalization with biorecognition elements such as aptamers or antibodies. SWCNT@UiO-67 and SWCNT@UiO-67-CH 3 demonstrated concentration-dependent responses to NF in PBS solution, validating the size-based sensing mechanism proposed for the SWCNT@MOF FET sensors. Owing to the matched size of the MOF pores and NF molecules, SWCNT@UiO-67 exhibited the best response among the tested SWCNT@UiO-MOF devices. A comparison between SWCNT@UiO-67 and SWCNT@UiO-67-CH 3 devices indicated that the sensor response was also related to the diffusion of NF into the MOF channel because of the incubation-based testing method. Three metabolites of different drugs were also tested, and the sensor device effectively screened out interfering molecules with sizes larger than the MOF pores. However, since the sensor responds solely on the basis of analyte size, specificity toward a single analyte cannot be achieved with one type of SWCNT@MOF sensor alone. More SWCNT@MOF composites need to be synthesized to construct a sensor array with different pore sizes and channel chemistries to improve the selectivity of the SWCNT@MOF FET sensing platform. With the assistance of machine learning-driven discrimination, such sensor arrays hold the potential to generate unique signals for different analytes, enabling molecule identification in future work.
Single-walled carbon nanotube (SWCNT)@metal–organic framework (MOF) field-effect transistor (FET) sensors generate a signal through analytes restricting ion diffusion around the SWCNT surface. Four composites made up of SWCNTs and UiO-66, UiO-66-NH 2 , UiO-67, and UiO-67-CH 3 were synthesized to explore the detection of norfentanyl (NF) using SWCNT@MOF FET sensors with different pore sizes. Liquid-gated FET devices of SWCNT@UiO-67 showed the highest sensing response toward NF, whereas SWCNT@UiO-66 and SWCNT@UiO-66-NH 2 devices showed no sensitivity improvement compared to bare SWCNT. Comparing SWCNT@UiO-67 and SWCNT@UiO-67-CH 3 indicated that the sensing response is modulated not only by the size matching between NF and the MOF channel but also by NF diffusion within the MOF channel. Additionally, other drug metabolites, including norhydrocodone (NH), benzoylecgonine (BZ), and normorphine (NM), were tested with the SWCNT@UiO-67 sensor. The sensor did not respond to NH or BZ but showed a sensing result toward NM similar to that toward NF because NM has a similar size to NF. The SWCNT@MOF FET sensor can avoid interference from larger molecules, but sensor arrays with different pore sizes and chemistries are needed to improve the specificity.
Experimental Section Preparation of SWCNT Stock Suspension SWCNTs (P3-SWNT, Carbon Solutions, Inc.) were dispersed in DMF at 0.5 mg/mL. The solution was sonicated for 1 h to suspend the SWCNTs. For each synthesis, the SWCNT suspension was sonicated for 15 min before use. Preparation of Zr Oxide-Cluster Solution Seven mL of DMF was mixed with 4 mL of acetic acid, followed by 71 μL of 70 wt % zirconium propoxide solution in 1-propanol. The mixture was heated in an oven at 130 °C for 2 h until the solution appeared pale yellow. The resulting mixture was cooled to room temperature. Synthesis of SWCNT@UiO-MOF Composites Syntheses of the UiO-MOFs were adapted from published methods. 60 , 61 Briefly, the SWCNT suspension was mixed with the MOF precursor solution, and an oil bath was used for the heated reaction. Detailed amounts and methods for each composite are included in the Supporting Information . Fabrication of FET Devices Composites and SWCNTs were deposited on prefabricated interdigitated electrodes by using dielectrophoresis (DEP). The prefabricated interdigitated electrode area is 300 × 200 μm, and the channels between the electrodes are 6 μm wide ( Figure S1 ). A Keithley 3390 Arbitrary Waveform Generator was used to generate a sine wave (10 V pp , 10 MHz). Three μL of sample suspension (0.5 mg/mL for composites, 0.1 mg/mL for SWCNT) was drop-cast on the device, and the sine wave was applied to the electrodes for 5 min. The same procedure was performed 2–3 times until conductive devices were achieved. The deposited devices were washed with water and then annealed at 200 °C for 1 h to remove the solvent residue. FET Sensing of Norfentanyl Norfentanyl solutions with concentrations from 1 ppb to 100 ppm were prepared by dissolving NF in PBS solution. FET measurements were performed using 300 μL of PBS or NF solution as the liquid gating medium. An Ag/AgCl reference electrode was in contact with the gating liquid. A 50 mV bias voltage was applied across the source–drain channel, while the gate voltage was swept from 0.6 to −0.6 V. All devices were stabilized with PBS solution until their FET curves remained the same after changing the gating PBS solution. After each measurement, devices were rinsed with DI water and blown dry with N 2 before the next gating liquid was added. Devices were incubated in the gating liquid for 10 min before measurement. FET measurements were made with Keithley 2400 source meter units.
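As an illustration of how transfer-curve data from such measurements could be reduced to a concentration calibration, the sketch below assumes a simple response metric (relative change in on-current versus the PBS baseline) and a log-linear concentration fit; the gate-voltage window, response values, and fit form are illustrative assumptions, not the authors' exact analysis.

```python
# Minimal sketch (assumed analysis, not the authors') of reducing liquid-gated FET sweeps
# to a concentration calibration curve for NF.
import numpy as np

def on_current(gate_v, drain_i, v_window=(-0.6, -0.5)):
    """Average drain current in a fixed gate-voltage window (p-type 'on' branch)."""
    mask = (gate_v >= v_window[0]) & (gate_v <= v_window[1])
    return drain_i[mask].mean()

def response(i_on_baseline, i_on_analyte):
    """Relative change in on-current versus the PBS baseline."""
    return (i_on_analyte - i_on_baseline) / i_on_baseline

# Synthetic gate sweeps (0.6 to -0.6 V) standing in for measured transfer curves.
vg = np.linspace(0.6, -0.6, 121)
baseline_i = 1e-6 * np.clip(-vg, 0, None)                    # idealized p-type branch in PBS
i0 = on_current(vg, baseline_i)

# Hypothetical responses at 1, 10, 100, 1000 ppb NF (placeholder scaling of the same curve).
conc_ppb = np.array([1.0, 10.0, 100.0, 1000.0])
resp = np.array([response(i0, on_current(vg, baseline_i * s))
                 for s in (1.02, 1.05, 1.09, 1.13)])

# Calibration: response vs log10(concentration), a common first-pass model.
slope, intercept = np.polyfit(np.log10(conc_ppb), resp, 1)
est = 10 ** ((0.07 - intercept) / slope)                     # invert calibration for a reading of 0.07
print(f"sensitivity = {slope:.3f} per decade; estimated concentration ~ {est:.0f} ppb")
```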
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsami.3c17503 . Experimental methods, nitrogen adsorption isotherms, FET curves, and additional drug metabolites calibration plots ( PDF ) Dynamic view of NF molecules moving within the pores ( MP4 ) Supplementary Material Author Contributions The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript. The authors declare no competing financial interest. Acknowledgments This work was supported by the Chem-Bio Diagnostics program grant HDTRA1-21-1-0009 from the Department of Defense Chemical and Biological Defense program through the Defense Threat Reduction Agency (DTRA).
CC BY
no
2024-01-16 23:43:51
ACS Appl Mater Interfaces. 2023 Dec 26; 16(1):1361-1369
oa_package/c0/3b/PMC10788826.tar.gz
PMC10788827
38096263
Introduction Wearable thermal control devices have various applications, such as enhancing individual thermal comfort, 1 providing thermal feedback in virtual and augmented reality spaces, 2 serving as thermal camouflage against infrared detection, 3 and offering thermotherapy for health issues. 4 Researchers have investigated effective wearable thermal control devices from material, design, and system perspectives, employing air transport, 5 liquid transport, 6 thermoelectric elements, 7 phase-change materials, 8 and textiles. 9 Soft robotics, 10 − 14 which has attracted a lot of attention in recent years, also offers a unique opportunity to make wearable devices that are more adaptable, comfortable, and closely conform to the human body. Due to their high cooling capabilities, liquid transport systems have been commonly used for cooling 15 since their initial proposal in 1959. 16 Liquid-cooling garments have been developed for specialized users such as racing car drivers, surgeons, chemotherapy and multiple sclerosis patients, athletes, and hazardous material handlers. 17 In these devices, chilled or warmed liquids circulate through tubes via a pump, facilitating heat exchange with the human body. Other examples include liquid transport for wearable tactile presentation via chemical reactions 18 and weight control, 19 expanding the application range. However, pumps for liquid transport generate significant noise and weight, and their bulky systems make them cumbersome to wear, posing challenges for casual use ( Figure 1 , left side). Furthermore, their standardized design limits user choice and fails to cater to individual needs. Recently, small pumps have been developed based on various driving principles. Figure S1 and Table S1 show their sizes, maximum flow rates, driving mechanisms, and applications. 20 − 32 In particular, electrohydrodynamic (EHD) pumps, which operate silently and do not generate heat, are gaining attention as driving sources for wearable thermal control devices. EHD pumps produce a continuous silent flow in the working fluid as the charge injected by the electrodes moves according to the electric field. 33 Compared with other pumps, EHD pumps have a high maximum flow rate, determining the rate of heat transfer, making them suitable for thermal control devices. Recent studies have proposed EHD pumps made of stretchable materials, 20 modular designs for easy expansion, 34 and shaped fibers for integration with textiles. 21 In wearable thermal control devices using liquid transport systems, soft tubes are preferred for their affinity with humans. Soft tubes have the advantage of not interfering with human movement. In unmaintained conditions, such as in human living spaces, physical contact with human motion and the external environment can cause tube deformation, potentially leading to significant issues such as downtime, reduced operating efficiency, and failures caused by localized pressure concentrations. Therefore, real-time flow rate sensing is required to detect blockages in the flow path. Adding a flow rate sensor to the system may require an additional power supply system, increasing the bulk and complexity of the system. Researchers have effectively demonstrated self-sensing actuators as an approach for system miniaturization and multifunctionality. 35 − 39 The realization of EHD pumps with self-sensing functionalities can lead to innovative smart pumps. Despite this, there have been few attempts to realize self-sensing pumps. 
Researchers have, however, developed self-sensing piezoelectric pumps using piezo actuators. 40 The driving mechanism of a piezo pump relies on mechanical vibration, which produces noise. A better approach for wearable systems is a pump with silent operation, such as an EHD pump. Recent research has investigated the response of the current to a flow imposed externally on an EHD pump and the underlying sensing mechanism. 41 However, this approach alone is insufficient in the context of wearable devices, where real-time self-sensing of flow rates is essential. It is necessary to establish a model equation for the current response to the self-generated flow rate and to validate the observed behavior against this equation. This is the core of our approach in the development of smart EHD pumps with a self-sensing functionality. Herein, we propose a liquid transport system with a pocketable and smart EHD-driven pump (PSEP) ( Figure 1 , right side). Unlike conventional liquid transport systems, the PSEP silently transports the working liquid without additional components, such as a separate power supply system or dedicated sensors inside the pump, for its self-sensing ability; thus, the entire system is simple, compact, and lightweight. Our system is portable and easy to wear and take off; thus, it does not restrict the user’s range of movement or clothing design. First, we demonstrate the current response of the PSEP to the flow rate in the loaded tube. We then validate the PSEP model and evaluate its performance and reliability. Furthermore, we develop a PSEP-driven liquid transport system for wearable thermal control. Users can store the main elements, such as the PSEP, power/control circuit, and heat source, in a pocket and perform various functions such as heating, cooling, and flow blockage detection through a smartphone. Moreover, the self-sensing of the PSEP promotes efficient and reliable operation through feedback to the user. The PSEP supports wearable thermal control devices by solving the problems of noise, weight, and large size and contributes to the next generation of wearable thermal control devices designed to meet individual needs.
Results and Discussion PSEP Concept Figure 2 a shows the developed PSEP, of size 21 cm 3 (10 × 2 × 1.05 cm) and weight 10 g. The PSEP design is based on conventional EHD pumps comprising three main components: electrodes, flow channels, and an insulating fluid. The charges injected from the electrodes into the insulating fluid move along the electric field, generating a unidirectional flow. As shown in Figure 2 b, the PSEP comprises two electrode layers with interdigitated electrode structures and a flow channel layer created using a single adhesive sheet. Each interdigitated electrode structure contains 10 pairs of electrodes, accelerating the EHD flow and producing an increased output. 42 Moreover, these electrodes simplify expansion and fabrication, integrate wiring, and reduce fluid friction. The straightforward design and use of accessible and easily processable materials enable rapid and cost-effective production. The PSEP transports liquids by applying kV-order voltage and estimates the flow rate based on the change in current, as shown in Figure 2 c. Figure 2 d illustrates the EHD pumping mechanism of the PSEP. In our PSEP, ions are injected into the working fluid from the electrode under the influence of a strong electric field. When the applied electric field reaches a critical strength, ions can overcome the energy barrier and tunnel directly from the cathode’s surface into the working fluid. Injection in insulating fluids containing many electronegative molecules, such as fluorine-based substances, is more likely to occur at the negative electrode because of its lower energy barrier compared to the positive electrode. The Coulomb force pushes the injected ions along the electric field lines. During this process, ions repeatedly collide with the fluid molecules. Consequently, a flow is generated from the negative to the positive electrode. Figure 2 e shows the self-sensing mechanism of the PSEP flow rate. A current of the order of μA is generated in the PSEP, and the output flow rate can be estimated from the current. Therefore, it has the advantage of keeping the system lightweight and compact without additional flow rate sensors and power supplies. In our previous work, 41 we reported the development of a hydraulically driven suction cup with contact detection that uses an EHD pump, and we found the relation between the flow rate, Q , and the electrical current between the electrodes, I , as I = I dri + I dif ( Q ) = I dri + αQ^(1/3) (eq 1), where I dri is the drift current, which follows Ohm’s law, I dif is the diffusion current arising from the concentration gradient of ions due to the flow of liquid, 43 and α is a constant. We assume that the same relationship holds for the flow rate of our self-sensing device. α is constant and is derived from previous studies of reactor model equations. This constant is determined by the ion concentration and the geometrical parameters of the electrodes and flow paths. In this study, α , the coefficient of the one-third power of the flow rate, is treated as the sensitivity. Although we ignored the effect of the EHD pump’s instability in our previous study, it becomes significant when the device is used for a long time. Careful selection of the liquid and electrode materials has a high potential to mitigate this instability. Here, we propose a simple data processing method to exclude the effect of the instability. We assumed that the main cause of the instability was the working fluid, whose chemical composition gradually changed over time.
Since the electrical conductivity of the fluid changes, the drift current rather than the diffusion current fluctuates: I ( t ) = I dif ( Q ) + I dri ( t ). To eliminate the time dependence, we define a reference time, t ′, and calculate the current variation ΔI ( t ) = I ( t ′) − I ( t ) = [ I dif ( Q ( t ′)) − I dif ( Q ( t ))] + [ I dri ( t ′) − I dri ( t )] (eq 2), which we use as an evaluation parameter. Assuming that the instability effect is much slower than the change of Q and that t ′ – t is sufficiently small, the terms in the second bracket can be ignored. t ′ can be considered as a series of reference times defined at certain intervals. In this case, the interval should be longer than the dynamics of the flow rate change that must be detected and shorter than those of the instability dynamics. Therefore, in this study, we define t ′ as the time at which I ( t ′) becomes the maximum in the interval, for the following reason. Because the flow in the EHD system is sensitive to the environment, I fluctuates even when the setting is constant. Because the fluctuation is caused by hydrodynamic dissipation, I is maximized when steady flow is ideally realized. Therefore, it is reasonable to adopt I max = I ( t ′) as the quantity characterizing the steady state without a load. Then, we obtain ΔI ( t ) = I max − I ( t ) ≈ I dif,max − I dif ( Q ) = α[ Q max ^(1/3) − Q ( t )^(1/3)] (eq 3), where I dif,max denotes the maximum diffusion current without any fluctuations or loads. Equation 3 is the calibration curve of the PSEP for detecting the flow rate from the change in current (an illustrative numerical sketch of this procedure is provided at the end of this section). In this study, we constructed a closed system in which the liquid is circulated through the PSEP, as shown in Figure 2 c. We demonstrated the self-sensing functionality and investigated the relationship between the flow rate and current in the PSEP. Applying a voltage of 3 kV to the PSEP causes EHD pumping and circulation of the working fluid. The experimental results in Figure 2 f show a flow rate of 20 mL/min and a current of 7 μA. When a 25 N load was applied to deform the tube by d = 2.5 mm for 40 s, the flow rate dropped to 0 mL/min, and the current decreased by a maximum of 1.5 μA. When the load was removed, the current increased by 1.5 μA, returning to the preload state. These experimental results suggest that the change in current is due to the theoretically estimated flow rate-dependent diffusion current, demonstrating the PSEP’s self-sensing through current monitoring. Validation of the Model Equation for Self-Sensing We investigated the relationship between the PSEP flow rate and current to validate the model equation. We varied the flow rate and current by controlling the tube’s deformation, d = 2.0–2.5 mm. Figure 3 a,b shows the time evolution of the flow rate and current of the PSEP under an applied voltage of 3.0 kV. The two dots in each curve indicate the maximum and minimum values of Q and I , denoted as Q max , Q min , I max , and I min . As expected, in the initial 0 to 20 s period, before deformation was applied to the tube, fluctuations in current were observed despite only slight changes in flow rate. Moreover, we observed that the greater the change in flow rate, the greater the change in current. Figure 3 c shows the relationship between the change in current (Δ I = I max – I ( t )) and the flow rate when a load is imposed ( Q min ) at each applied voltage. The data were fitted for each applied voltage using eq 3 . The fittings are better for V = 2.5 and 3.0 kV, suggesting that imposing a higher voltage induces instability and is unsuitable for the sensor. To verify the validity of eq 3 , we performed fitting using ΔI = α[ Q max ^ n − Q ^ n ] (eq 4), where n is a variable. We compared the coefficient of determination R 2 with varying n .
Here, R 2 indicates the goodness of fit; the closer R 2 is to 1, the better the model equation fits our experimental data. Figure 3 d shows these results. The peak occurs at approximately n = 1/3 (red dashed lines), indicating the validity of this model equation. Performances of PSEP We evaluated the PSEP performance at each applied voltage. We estimated α from the experimental results by least-squares fitting. In Figure 3 c, eq 4 is drawn with the estimated α as a dashed line. α , which corresponds to the sensitivity, and the coefficient of determination R 2 increase as the applied voltage decreases, as shown in Figure 3 e,f. In contrast, the maximum flow rate increases with an increase in applied voltage, as shown in Figure 3 g. The maximum flow rate is 30 mL/min at a voltage of 3.5 kV. However, at 3.5 kV, we observed dielectric breakdown during the experiment, making this voltage unsuitable for long-term use. Therefore, an applied voltage of 3.0 kV or lower was used for sensing and pumping to ensure stability. Furthermore, as shown in Figure 3 h, the PSEP saves energy when the flow rate Q = 0 compared to when the flow rate is at its maximum. To investigate the reliability and cyclic behavior of the self-sensing performance, we conducted 10-cycle tests at a voltage of 3.0 kV; the tube was deformed cyclically in three patterns at 0.25, 0.05, and 0.025 Hz. Figure 4 a–c shows the time evolution of the flow rate and current at each input frequency. At all input frequencies, the current responded to changes in the flow rate. Figure 4 d–f shows the relationship between the flow rate and the change in current at each input frequency. Here, the I max used to define Δ I is the maximum current before the start of the cycle. We categorized the plots into loading (orange) and unloading (green) processes. We observed hysteresis and fluctuations in the current at all input frequencies. Figure 4 g–i shows a theoretical verification similar to that in Figure 3 d. Here, the average R 2 over all the cycles was evaluated, distinguishing the loading and unloading processes. First, we focus on the input frequency. At lower frequencies, the change in current relative to the flow rate was large, and the average R 2 peaked at approximately n = 1/3. However, the change in current tended to dissipate from cycle to cycle. We attribute this to the observed dynamics of instability caused by long measurement times rather than to frequency effects. This result indicates the limitation of continuous measurement intervals and the need for a redefinition of the reference time t ′. At high frequencies, the ratio of the change in current to the flow rate was small, and the peak of the average R 2 deviated from n = 1/3, probably because the next input started before the current relaxed. In addition, we derived eq 3 as the change in flow from the steady state; this assumption breaks down at high frequencies. Therefore, the flow rate can be properly estimated up to 0.05 Hz. As a reference, we calculated the response time ( t 2 – t 1 ) as the difference between the time of the lower peak of the flow rate ( t 1 ) and that of the current ( t 2 ) at 0.05 Hz; the response time was within 2 s ( Figure S7 ). Second, we focus on the loading and unloading processes. Compared with loading, the unloading process increased the peak of the average R 2 , but the peak deviated from n = 1/3. Loading and unloading are unsteady processes that change the channel width; however, loading reduces the Reynolds number, whereas unloading increases it.
Since our theoretical equation is derived assuming steady-state conditions, the approximation is more likely to hold for operations that lower the Reynolds number than for operations that increase it. The result indicates that the self-sensing of the PSEP was more accurately measured during loading than during unloading. PSEP for Wearable Thermal Control Application We designed a pocketable system to regulate liquid circulation employing the PSEP, targeting wearable thermal control applications. The primary constituents of the system, as shown in Figure 5 a, include the PSEP, the heat source, and the circuit for control and power supply. The entire system fits snugly in a T-shirt pocket with dimensions of 10 × 10 cm and weighs a mere 30 g including the tubes and working liquid. Incorporating the tube within an arm cover protects against ultraviolet radiation while fostering thermal comfort for the user. A schematic of the system, provided in Figure 5 b, shows how the PSEP transports the chilled or warmed working fluid, driven by a voltage of 2.5 kV from the circuit portrayed in Figure 5 c. Through heat exchange with the wearer via the tubes, the working fluid alters the temperature of the target area before being cooled or heated again. In the operation of our PSEP, minimal heat generation is expected. The heat can be estimated from the input power, calculated as the product of the current and voltage. For instance, as illustrated in Figure 3 b, with a voltage of 3 kV and a current of 6.5 μA, the input power is calculated to be 19.5 mW, indicating that significant heating of the PSEP is unlikely. Indeed, no notable temperature changes were observed in previous studies of EHD pumps. This characteristic of low heat generation makes the PSEP suitable for applications that require both heating and cooling capabilities. The closed-loop design of the system and the principle of the PSEP’s pumping make it less susceptible to mechanical disturbances. Also, the self-sensing feature enhances the reliability of heat transport. The system can detect an excessive load on the tube, interpreting such an event as an inability to transport fluid, and accordingly issue a warning to the user. Connecting to a smartphone via Wi-Fi is another important feature. This enables the user to regulate and manage the system while it is stored in a pocket. Figure 5 d shows an intuitive graphical user interface (GUI) operating on a smartphone designed to facilitate user interaction. We observed changes in temperature by using an infrared camera (FLIR ONE Pro, FLIR). The temperature variations during heating and cooling were Δ T heat = 22.2 − 20.9 = 1.3 °C and Δ T cool = 19.8 − 22.1 = −2.3 °C, as shown in Figure 6 a,b, respectively. Figure S9 also shows the temperature change versus time for each process. We thus demonstrated its effectiveness as a thermal control device. Movies S1 and S2 show the heating and cooling operations, respectively. Figure 6 c shows the detection of blockages. We deliberately blocked the tube and monitored it from the smartphone GUI. Movie S3 shows the full sequence, including device operation. Thus, PSEP innovations featuring quiet liquid transport and self-sensing capabilities offer high reliability and will allow users to incorporate thermal control features into their garments.
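The two sketches below are illustrative only. The first applies the interval-based reference current and the calibration of eq 3 (as reconstructed above) to a synthetic current trace to estimate the flow rate and reveal a blockage; the sampling rate, interval length, α, and Q max values are placeholders rather than measured PSEP parameters, and drift is omitted for brevity.

```python
# Minimal sketch of the self-sensing data processing, under the reconstructed calibration
# dI = alpha * (Q_max**(1/3) - Q**(1/3)); all numerical values are placeholders.
import numpy as np

def flow_from_current(current, alpha, q_max, dt=0.1, interval_s=60.0):
    """Estimate flow rate from pump current using the interval maximum as reference I_max."""
    n = int(interval_s / dt)
    q_est = np.empty_like(current)
    for start in range(0, len(current), n):
        block = slice(start, start + n)
        i_max = current[block].max()              # I(t'): interval maximum, taken as the unloaded steady state
        d_i = i_max - current[block]              # current variation Delta I(t)
        root = np.clip(q_max ** (1 / 3) - d_i / alpha, 0.0, None)
        q_est[block] = root ** 3                  # invert the calibration curve
    return q_est

# Synthetic demo: 20 mL/min steady flow, fully blocked between t = 70 s and t = 110 s.
dt = 0.1
t = np.arange(0.0, 120.0, dt)
alpha, q_max = 0.55, 20.0                         # placeholder sensitivity and maximum flow rate
q_true = np.where((t > 70) & (t < 110), 0.0, q_max)
rng = np.random.default_rng(1)
current = 5.5 + alpha * q_true ** (1 / 3) + 0.02 * rng.standard_normal(t.size)

q_hat = flow_from_current(current, alpha, q_max, dt=dt)
print(f"estimated flow before blockage: {q_hat[(t > 10) & (t < 60)].mean():.1f} mL/min")
print(f"estimated flow during blockage: {q_hat[(t > 80) & (t < 100)].mean():.1f} mL/min")
```

The second sketch reproduces the kind of exponent check described above: ΔI = α( Q max ^ n − Q ^ n ) is fitted over a grid of exponents n on synthetic data and the coefficient of determination R 2 is compared; the data and α value are placeholders, and the optimum falls near n = 1/3 by construction.

```python
# Exponent scan on synthetic data (placeholder values, not the measured PSEP results).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
q_max = 20.0                                      # mL/min (placeholder)
q = np.linspace(0.0, q_max, 25)                   # flow rates under load
alpha_true = 0.55
d_i = alpha_true * (q_max ** (1 / 3) - q ** (1 / 3)) + 0.03 * rng.standard_normal(q.size)

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

results = {}
for n in np.arange(0.1, 1.01, 0.05):
    model = lambda qq, alpha: alpha * (q_max ** n - qq ** n)   # eq 4 with fixed exponent n
    (alpha_fit,), _ = curve_fit(model, q, d_i, p0=[0.5])
    results[round(float(n), 2)] = r_squared(d_i, model(q, alpha_fit))

best_n = max(results, key=results.get)
print(f"best exponent n = {best_n} with R^2 = {results[best_n]:.3f}")    # expected near 1/3
```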
Conclusions In this study, the relationship between the flow rate and the current output of an EHD pump was investigated theoretically and experimentally, and the PSEP, a pump with both pumping and flow rate self-sensing functions based on a conventional EHD pump design, was introduced. We experimentally confirmed its current response, and the proposed data processing method eliminated the effect of the current instability. Theoretical verification and performance evaluation showed that the flow rate exponent was equal to 1/3, as indicated by the theoretical equation, and that the coefficient of determination was above 0.9, indicating the consistency of the model. Pumping performance (pressure and flow rate) and self-sensing performance (sensitivity, accuracy, and responsiveness) were comprehensively evaluated. Finally, we present a breakthrough in wearable thermal control technology, namely, a lightweight and compact liquid circulation system powered by the PSEP. Our system fits into a 10 × 10 cm T-shirt pocket and weighs only 100 g. Our system facilitates personal thermal management by providing cooling and heating functions, effectively combining fashion and functionality. Moreover, Wi-Fi connectivity allows remote control and expands operability for the user. Importantly, the self-sensing function provides critical feedback on flow blockages, enhancing the system’s reliability. This integration of practicality and style in wearable thermal control paves the way for the future of personalized thermal control. In future research, we will first focus on improving the materials and shapes of the electrodes for long-term stability and heightened sensing accuracy. We will also conduct long-term cycle and continuous operation tests to validate the reliability of our PSEP. Subsequently, we plan to integrate multiple miniaturized PSEP arrays, precisely engineered to identify malfunctions and relay real-time feedback to users. Drawing from pioneering studies that illustrated the use of self-healing liquids transported by EHD pumps, 44 our vision encompasses channeling these innovative liquids to the malfunction points pinpointed by the PSEP, fortifying the system’s resilience. The proposed PSEP has the potential to become a new generation of universally applicable wearable cooling and heating devices because of its multifunctionality, silent operation, and light weight.
Seamlessly fusing fashion and functionality can redefine wearable technology and enhance the quality of life. We propose a pocketable and smart electrohydrodynamic pump (PSEP) with self-sensing capability for wearable thermal controls. Overcoming the constraints of traditional liquid-cooled wearables, PSEP with dimensions of 10 × 2 × 1.05 cm and a weight of 10 g is sufficiently compact to fit into a shirt pocket, providing stylish and unobtrusive thermal control. Silent operation coupled with the unique self-sensing ability to monitor the flow rate ensures system reliability without cumbersome additional components. The significant contribution of our study is the formulation and validation of a theoretical model for self-sensing in EHD pumps, thereby introducing an innovative functionality to EHD pump technology. PSEP can deliver temperature changes of up to 3 °C, considerably improving personal comfort. Additionally, the PSEP system features an intuitive, smartphone-compatible interface for seamless wireless control and monitoring, enhancing user interaction and convenience. Furthermore, the ability to detect and notify users of flow blockages, achieved by self-sensing, ensures an efficient and long-term operation. Through its blend of compact design, intelligent functionality, and stylish integration into daily wear, PSEP reshapes the landscape of wearable thermal control technology and offers a promising avenue for enhancing personal comfort in daily life.
Experimental Section Working Liquid The working fluid was a fluorinated liquid (Novec 7300, 3M) with the following physical properties. EHD pumping requires low conductivity (<10 –7 S/m), high dielectric withstand voltage, and a high dielectric constant. Novec 7300 has a conductivity of 10 –9 S/m, a dielectric withstand voltage of 5–6 kV (gap: 0.5 mm), and a dielectric constant of 6.1. It also has a boiling point of 76 °C, high thermal conductivity, ultralow toxicity, zero ozone depletion potential, no flash point, and nonflammability, making it suitable for wearable thermal control devices. Flow State Estimation in PSEP for Theoretical Validation The PSEP has a channel height of 0.5 mm, a channel width of 5 mm, a channel cross-sectional area of 0.025 mm 2 , and a hydraulic diameter of 0.910 mm. The Reynolds number Re = 714 < 2000 at the maximum flow velocity of 0.549 m/s (30 mL/min) at a voltage of 3.0 kV. Therefore, the flow investigated in this study was within the laminar flow range for which the model equation is valid. GUI on Smartphone The GUI of the proposed wearable device is intuitive and runs on the user’s smartphone. It was implemented using the Arduino IoT Remote platform. Figure 4 D shows the GUI screen on the smartphone, which provides the following functions. (1): Turns the pumping function of the PSEP on and off. (2): Indicates blockage; the indicator lights up red in the case of a blockage and green if there is no problem with the operation, with a change in current of 0.3 μA used as the threshold. (3) and (4): Turn the cooling and heating functions of the Peltier element on and off. (5): Adjusts the output power of the Peltier element with a slider; the number indicates the percentage of the absolute value of the voltage applied to the Peltier element. (6): Monitors the current values in real time so that irregular inputs can be checked.
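The following is a minimal sketch, not the authors' Arduino implementation, of the blockage-indicator logic described for GUI function (2): a drop in pump current relative to a reference is compared against the 0.3 μA threshold.

```python
# Illustrative sketch (not the authors' implementation) of the blockage-indicator logic:
# compare the drop in pump current against the 0.3 microamp threshold described above.
BLOCKAGE_THRESHOLD_UA = 0.3  # change in current used as the threshold (microamps)

def blockage_status(reference_current_ua: float, current_ua: float) -> str:
    """Return 'red' (blockage suspected) or 'green' (normal operation)."""
    delta = reference_current_ua - current_ua
    return "red" if delta > BLOCKAGE_THRESHOLD_UA else "green"

# Example: reference current 7.0 uA in steady operation; a blocked tube drops it to 5.5 uA.
print(blockage_status(7.0, 6.9))  # green: within normal fluctuation
print(blockage_status(7.0, 5.5))  # red: current drop exceeds the 0.3 uA threshold
```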
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsami.3c15274 . Plot of various pump sizes and flow rates; pump operating principle, maximum flow rate, size, and application are summarized; electrode design of PSEP; digital and rapid fabrication process of electrode, substrate, and channel layers; experimental setup; compression test to clarify the relationship of tube deformation to the load applied to the tube; FFT analysis of currents and flow rates for cycling tests; the response time ( t 2 -t 1 ) as the difference between the time of the lower peak of the flow rate ( t 1 ) and that of current ( t 2 ) at 0.05 Hz input; electrical schematic of the power/control circuit; and temperature versus time in heating (left) and cooling (right) demonstrations ( PDF ) Heating function of wearable application ( MP4 ) Cooling function of wearable application ( MP4 ) Blockage detection, wireless control, and monitoring functions of wearable application ( MP4 ) Supplementary Material The authors declare no competing financial interest. Acknowledgments This work was supported by the JSPS KAKENHI (grant number 18H05473) and a Grant-in-Aid for JSPS Fellows (grant number 21J23563). We thank the Japan Society for the Promotion of Science for its support under Grants-in-Aid for Scientific Research (B)21H01293 and (A)21H04882.
CC BY
no
2024-01-16 23:43:51
ACS Appl Mater Interfaces. 2023 Dec 14; 16(1):1883-1891
oa_package/83/cd/PMC10788827.tar.gz
PMC10788828
38117934
Introduction Endothelialization is the process by which endothelial cells (ECs) adhere, migrate, and proliferate to form the endothelial tissue. This process occurs naturally on biological substrates, such as the extracellular matrix (ECM), and can be induced on synthetic materials including polymers and hydrogels. In vivo, endothelial tissue comprises a monolayer of squamous cells lining the internal surfaces of the circulatory and lymphatic systems, firmly anchored to the vessel’s ECM. Clinically, endothelialization is particularly critical for cardiovascular devices like vascular grafts and stents, where it promotes the formation of a compatible endothelial layer on their surfaces. 1 The presence of endothelial tissue on these devices has been shown to enhance their long-term patency by reducing the risks of thrombosis and restenosis. 2 − 5 However, the spontaneous formation of such tissue on synthetic vascular grafts (SVGs), especially those with an inner diameter of less than 6 mm, is an uncommon occurrence, which is a significant factor in their lower patency rates compared to autologous vascular grafts. 6 − 9 Despite the clinical preference for autologous grafts in bypass surgeries, they are not without their limitations including donor site morbidity and limited availability. Indeed, an estimated 10–20% of patients requiring a coronary artery bypass graft do not have suitable autologous tissue, often due to factors like varicosities, deep venous thrombosis, prior surgical interventions, or suboptimal vessel quality. 10 , 11 For these patients, there is a pressing clinical need for a synthetic biomaterial that can serve as an equivalent or superior alternative to autologous tissue. A multitude of materials have been explored for their potential to replace autologous vasculature, such as synthetic and natural polymers, decellularized matrices, and self-assembled tissue constructs. 12 Among these, small-molecule conjugation has emerged as an effective strategy to enhance endothelialization. 13 Yet, the prevalent molecules designed to attract and support the proliferation of endothelial progenitor cells (EPCs) and ECs face several challenges. They tend to be costly, present difficulties in functionalization onto novel materials, and often lack the necessary structural stability, specificity in capturing target cells, and essential biological functions which together have curtailed their practical applications and clinical viability. 14 Moreover, while some tissue-engineered vascular grafts (TEVGs) have demonstrated promising results, surpassing autologous grafts or standard SVGs in certain aspects, their high production costs remain a significant barrier to widespread clinical use. 15 The field stands in need of a surface treatment approach that is not only conducive to endothelialization but also scalable, cost-effective, and nonthrombogenic, to facilitate the clinical adoption of innovative SVG materials. The inherent surface characteristics of biomaterials, specifically roughness and chemical composition, are paramount in the endothelialization of implanted vascular biomaterials. 16 Surface roughness, for instance, has been shown to bolster EC adhesion and proliferation, providing essential topographical cues that facilitate cell attachment. 17 Moreover, the presence of certain functional groups, including carboxylic acids and amino groups, has been recognized for their role in improving cell attachment and proliferation. 
18 Beyond macro-scale textures, nanoscale roughness emerges as a critical feature for modulating EC behavior, further promoting endothelialization across diverse material surfaces. 19 Research involving polymers like polyethylene reveals that nanoscale surface modifications can substantially improve EC adhesion, spreading, and proliferation. 20 Advances in biomaterial fabrication techniques, such as electrospinning, nanoimprint lithography, combinatorial drug loading, and enhanced surgical methods, have led to the development of micro- and nanostructured surfaces with precise roughness control. These innovations enable more intricate exploration into the effects of nanoscale features on endothelialization. 21 The convergence of these findings and technological advancements holds great promise for cardiovascular biomaterial design, suggesting that the integration of nanoscale roughness alongside hydrophilicity and surface charge into biomaterial surfaces could significantly augment endothelialization. Reactive ion plasma (RIP) treatment is a powerful and versatile technique for creating nanoscale roughness on polymer surfaces. This technique involves the generation of reactive species in a low-pressure plasma environment, which interacts with the polymer surface, altering its chemical composition, topography, and wettability. RIP treatment has been demonstrated to improve the endothelialization of various polymers, including polyurethane (PU), polytetrafluoroethylene (ePTFE), and poly(vinyl alcohol) (PVA). 22 − 24 The degree to which the polymeric surface, and therefore its endothelizability, is altered depends on the RIP ion energy, dose, and ion species, which is determined by the precursor gas used to generate plasma. 25 By modifying surface properties such as roughness and introducing functional groups, RIP treatment can increase the adhesion, migration, and proliferation of ECs on polymer surfaces, thereby promoting endothelialization. Additionally, RIP treatment provides sterilization benefits, as the reactive species can also eradicate microorganisms, making it a dual-purpose technique. 26 This feature is particularly advantageous for cardiovascular biomaterials, which necessitate both sterility and enhanced endothelialization potential. However, the efficacy of RIP in promoting endothelialization may be diminished over time. This is due to time-dependent thermodynamically driven processes such as hydrophobic recovery, a process where polymer chains reorient and low-surface-energy species migrate to the surface, thus potentially reversing the initial beneficial modifications of the RIP treatment. 27 Such hydrophobic recovery could adversely affect the long-term endothelial compatibility of RIP-treated surfaces, as the initial improvements in wettability, surface chemistry, and topography subside. 28 , 29 Furthermore, the material’s behavior over time is intricately linked to the specific RIP treatment parameters and storage conditions. 30 The phenomenon of hydrophobic recovery underlines the necessity for ongoing research to elucidate its mechanisms and devise methods to counteract its effects. This understanding is crucial to ensuring that the biocompatibility and stability of cardiovascular biomaterials are maintained in the long term. PVA is recognized for its nonthrombogenic and inert qualities, making it a viable hydrogel material for various biomedical applications, including small-diameter vascular grafts. 
Its suitability is further underscored by its mechanical properties—compliance and burst pressure—which can be finely tuned to align with those of native blood vessels. 31 The versatility of PVA is evident in its ability to be synthesized with different cross-linkers, allowing for a range of chemical and mechanical characteristics to be achieved. 32 , 33 Sodium trimetaphosphate (STMP), a food-grade cross-linker, is frequently utilized for hydrogel formation through a process known as phosphoesterification. This process creates a network of cross-links between phosphate groups and hydroxyl groups within the hydrogel, effectively forming a stable matrix. 31 , 34 STMP is preferred over other cross-linkers like formaldehyde and glutaraldehyde, which have been associated with increased thrombogenicity and cytotoxicity of the resulting hydrogels. Notably, PVA cross-linked with STMP (STMP-PVA) and functionalized with aminated-fucoidan has demonstrated enhanced endothelialization and reduced thrombogenicity. 35 The potential of STMP-PVA, including versions treated with RIP, to fulfill the material requirements for vascular grafts has been explored in numerous investigations, including studies conducted in nonhuman primate models. 34 − 37 These studies indicate that RIP-treated PVA surfaces exhibit a higher affinity for ECs and a decreased accumulation of platelets and fibrinogen, particularly when compared to ePTFE grafts in thrombosis models without anticoagulation. Nonetheless, the long-term stability of the beneficial properties conferred to PVA by RIP treatment has yet to be characterized, a gap that is crucial to address for the clinical application of such promising biomaterials. In the present study, we investigated the impact of RIP treatment on STMP-PVA using a selection of three precursor gases and two levels of RF power. The effects were monitored at two distinct time points: shortly after treatment within 14 days and at a prolonged interval of 230 days. The experimental conditions are depicted in Figure 1 . To evaluate the changes induced by the RIP treatment, we employed X-ray photoelectron spectroscopy (XPS) for surface chemistry analysis, atomic force microscopy (AFM), and scanning electron microscopy (SEM) for topographical assessment. Furthermore, the potential for endothelialization was determined by measuring nuclear DNA 48 h following the seeding of ECFCs. We posited that a surface textured by RIP treatment, exhibiting both roughness and charge, would enhance endothelialization. However, we also anticipated that these surface modifications might weaken over time. To the best of our knowledge, this is the first study to assess the persistence of the effects of RIP treatment on the surface properties of PVA and its subsequent influence on endothelialization. The findings from this research should significantly inform the development of RIP-treated biomaterials, including medical devices that are sterilized by using cold plasmas.
Materials and Methods PVA Manufacturing STMP-PVA was manufactured as previously described. 38 Briefly, 15% (w/v) sodium trimetaphosphate (STMP, Sigma, St. Louis, MO) was added to aqueous PVA, followed by 30% (w/v) sodium hydroxide, and cured as films. A final concentration of 10% (w/v) aqueous PVA (Sigma, average MW 85–124 kDa, 87–89% hydrolyzed) was used for all hydrogel samples. Reactive Ion Plasma Treatment PVA samples were treated using a Plasma-Therm Batchtop VII apparatus (St. Petersburg, Florida). An RF power of 50 or 100 W with a DC bias of 370 V, pressure of 100 mTorr, and total gas flow rate of 50 sccm were used for all studies. Oxygen, nitrogen, or argon was used for the RIP treatments, and the samples were exposed to the RIP for 5 min. The sample nomenclature includes the type of precursor gas (Ar, N 2 , or O 2 ) and the RF power at which the sample was treated. For example, argon treated at 50 W would be referred to as Ar-50. Samples were considered “activated” and used for characterization within 14 days of exposure to plasma. “Aged” samples were sealed and stored for approximately 230 days until characterization because 230 days from the initial treatment date was shown, in a similar polymer, to be the duration for hydrophobic recovery. 39 Scanning Electron Microscopy The PVA samples were mounted in a dry state on conductive double-sided carbon tape along with colloidal graphite to minimize charging. The mounted samples were then sputter-coated with Au/Pd at a ratio of 60:40 to form a 5 nm film. Images were collected using an FEI QUANTA 3D dual-beam scanning electron microscope at up to 50 000× magnification at either a 45 or 90° angle to the surface. Atomic Force Microscopy Untreated and RIP-treated PVA samples were measured in air using a Bruker Dimension Fastscan Bio Icon AFM at <14 days and at 230 days after treatment. PVA samples were trimmed with surgical scissors and adhered to glass slides using double-sided tape. The measurements were performed in Peakforce Tapping (PFT) Mode with Fastscan-C probes (spring constant = 0.8 N/m; end radius = 5 nm) on a Fastscan scanner. Samples were documented with 5 μm scans at 0.5–1 Hz and 512 × 512 resolution with a PFT frequency of 1 kHz. The peak force set point, amplitude, and gain were set to the lowest values, which enabled consistent tracking of the sample topology without loss of fidelity. Data processing and roughness quantification were performed using NanoScope Analysis 2.0 (Bruker Nano Surfaces, Billerica, Massachusetts). Four R q values were determined for each of the aged and activated samples from distinct regions in the AFM scans. Prior to the roughness analysis, the scan data were flattened and plane fit, streak artifacts were removed, and a 3 × 3 median filter was applied to correct aberrant pixels resulting from noise. X-ray Photoelectron Spectroscopy XPS is a commonly used surface-sensitive technique that explores the chemical makeup of the top layer of a material up to a depth of 10 nm. 40 XPS was used to determine the elemental composition of PVA before and after the RIP treatment. A spot size of 100 μm in diameter was used along with an electron flood gun for charge neutralization. The spectra were collected on a Versaprobe II (Physical Electronics, Chanhassen, Minnesota) at a takeoff angle of 45° with a monochromatic Al Kα source. One spot for each of the three different samples was collected for each scan. High-resolution scans were taken with a step size of 0.1 eV and a pass energy of 40 eV. 
The binding energy scales were calibrated to the CH x peak at 285 eV in the C 1s region, with a linear background for peak quantification. Endothelial Colony-Forming Cell Isolation and Culture ECFCs were isolated from the peripheral blood of juvenile male baboons (Papio anubis) as previously described. 41 − 43 Briefly, 50 mL of blood was collected in a 7% citrate solution via venipuncture before layering the blood on top of Histopaque-1077 (Sigma, St. Louis, MO) in centrifuge tubes in a 1:1 ratio. The tubes were centrifuged for 30 min at 500 G without a brake to isolate the mononuclear cells. Once mononuclear cells were collected, the cell suspension was washed with Hank’s balanced salt solution (HBSS, HyClone, Logan, UT) before centrifugation at 500 G for 10 min to form a cell pellet. Cells were then counted, resuspended in VascuLife VEGF Endothelial Medium (Lifeline Cell Technology, Frederick, MD) supplemented with 20% fetal bovine serum, and seeded onto tissue culture plates at a density of 20 million cells per well. Cells were placed in an incubator, and the medium was changed daily for the first 7 days and every 3 days thereafter. Endothelial cell outgrowth colonies were allowed to develop for 2–4 weeks before ECFCs were collected via CD31-positive selection using magnetic Dynabeads (Invitrogen, Carlsbad, California) and frozen for long-term storage. Endothelial Colony-Forming Cell Quantification ECFCs were cultured on 8 mm PVA samples as described previously. 36 Quant-iT PicoGreen dsDNA Assay kits (Invitrogen, Carlsbad, CA) were used to quantify the number of cells present on the surface of the PVA punches 48 h after seeding and after each RIP treatment. PVA samples were first RIP-treated as described in the Reactive Ion Plasma Treatment section, and nontreated 48-well cell culture plates (Corning, Corning, NY) were coated with agarose prior to inserting the PVA samples to block any contact between ECFCs and the bottom surface of the well after seeding. After allowing the ECFCs to attach and proliferate for 48 h, the samples were washed thoroughly with PBS to remove any unattached cells and frozen overnight at −20 °C. The cells were then lysed with SDS and diluted in TE buffer, and the dsDNA was labeled using Quant-iT PicoGreen reagent (ThermoFisher, Waltham, MA). dsDNA from ECFC-seeded PVA was quantified from the fluorescence intensity using a standard curve of calf thymus DNA (Invitrogen, Carlsbad, California). Immunostaining of Endothelial Colony-Forming Cells Both activated and aged ECFC-seeded samples were subjected to an identical immunostaining process. The samples were first fixed in 48-well plates using 3.7% paraformaldehyde warmed to a physiological temperature. This was followed by a 10 min permeabilization phase with 0.1% TritonX-100. Image-iT FX Signal Enhancer (Invitrogen, Carlsbad, CA) was subsequently added to each well and given a 30 min incubation period to enhance the fluorescence signal. F-actin was stained by administering Alexa Fluor 568 phalloidin (Invitrogen, Carlsbad, CA), which was diluted 1:200 in PBS, for 1 h. Next, nonspecific antibody binding was averted by blocking the wells with 10% goat serum in Buffer #1 for 30 min. Primary staining was carried out by adding an anti-VE-cadherin primary antibody (Invitrogen, Carlsbad, CA), diluted 1:100 in PBS containing calcium and magnesium and 1% BSA, to each well and incubating for 1 h. Secondary staining was achieved using anti-mouse IgG1 Alexa Fluor 488, diluted 1:500 in PBS supplemented with calcium and magnesium.
DAPI (Invitrogen, Carlsbad, CA) was then added at a dilution of 1:10 000 in PBS with calcium and magnesium and 1% BSA for a 5 min period for nuclear staining. Finally, the activated or aged PVA samples were delicately mounted onto glass slides using ProLong Gold Antifade (Invitrogen, Carlsbad, CA) to minimize photobleaching during subsequent microscopy. The samples were left undisturbed to cure at room temperature overnight in preparation for imaging. Percent Confluence Calculation To quantify the percent confluence of the ECFC cultures on the PVA samples, we performed the following calculations. Based on previous work, 36 the untreated samples were confirmed to have no adhesion; therefore, any detected dsDNA was considered the background for all measurements. The surface area of a single ECFC was assumed to be 2000 μm 2 , and the cells were assumed to pack hexagonally. 44 − 46 A mass of 7 pg of DNA per cell was used to calculate the number of cells on the STMP-PVA surface. 47 First, the number of ECFCs per sample was obtained by determining the total mass of dsDNA in the sample (m total ) using the PicoGreen assay and dividing that value by the mass of dsDNA in a single ECFC (m ECFC ). The number of ECFCs was then multiplied by the surface area of a single ECFC (SA ECFC ) and divided by the cell packing efficiency (ε). The result was a surface area representative of the total number of ECFCs on the sample (SA total ), which was then divided by the total surface area of the substrate (SA substrate ) to determine the % confluence (a minimal worked sketch of this calculation is given at the end of the Methods). The value for the number of ECFCs on untreated samples was subtracted from the values for RIP-treated samples, as no cells were detected on the surface of untreated samples in this or previous studies using brightfield or fluorescence microscopy. Electrostatic Force Analysis Previous studies have shown that charged nanotopographic surfaces can exert considerable electrostatic forces on surrounding nanostructures. 48 The modified electrostatic model developed by Lekner et al., which describes two cylinders carrying uniform surface charge, 49 was used to gain insight into the forces that may influence the degradation mechanics of the RIP-treated PVA topography. In this model, two uniformly charged cylinders with identical dimensions (q a = q b , r a = r b ) separated by a distance s were considered. The bicylindrical coordinates u and v are introduced and mapped to the Cartesian coordinates x and y via the hyperbolic relations x = l sinh u/(cosh u − cos v) and y = l sin v/(cosh u − cos v), where the scale length l is set by the cylinder radius and separation. The force per unit length on the right-hand cylinder was obtained by relating the angle A to the bicylindrical coordinate v and substituting these expressions into the electrostatic force equation, with C a = cosh u a , S a = sinh u a , and c = cos v. For uniformly charged structures, a three-term truncation of the resulting infinite series, with terms T n = e −nu a tanh(nu a ), was used to model the electrostatic force acting between the cylinders. Nanostructure dimensions were determined from SEM images taken normal to the sample surface by using a custom MATLAB script and were used in the derived expression to produce electrostatic force estimates. Statistical Methods SPSS 28 was used for all of the statistical calculations. Differences in group means of ECFC attachment between RIP treatments for either activated or aged samples were determined by using analysis of variance (ANOVA). First, we tested for normality and outliers in the dsDNA data, which was our dependent variable, for each RIP treatment (level).
All data were normally distributed, both as groups (activated or aged) and at each level, except for aged O 2 -100, for which the Shapiro–Wilk test returned a significant value of 0.31. However, ANOVA is robust to deviations from normality, so no adjustments or transforms were performed on the raw data. Two outliers were removed, from the aged Ar-50 and the aged untreated samples. Within the aged group, all levels had equal variance; for the activated group, Levene’s test gave a significant value of 0.21, indicating unequal variances among levels. However, ANOVA is generally considered robust to heterogeneity of variance if the largest variance is not more than 4 times the smallest variance, which was true for the activated samples. Furthermore, the general effect of heterogeneity of variance is to make the ANOVA less efficient; therefore, any significant effects reported are still considered reliable. The data were further analyzed with Tukey’s post hoc test, using pairwise comparisons, to determine differences between each level and the untreated samples within groups. A p -value of < 0.05 was used to determine significance among levels within groups. Differences between groups within a level were tested using a one-sided paired t test. AFM results are reported as level means and standard deviations. Level means were compared pairwise between groups using a one-sided paired t test and within groups using ANOVA with Tukey’s post hoc test. All XPS data describing the charged and uncharged nitrogen species were evaluated between groups using a paired two-sided t test; levels were not compared for the XPS data.
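To make the percent-confluence calculation described above concrete, the following is a minimal sketch in Python. The 7 pg of dsDNA per cell, the 2000 μm2 cell footprint, and the 8 mm punch diameter come from the Methods; the hexagonal packing efficiency of π/(2√3) ≈ 0.9069, the function name, and the example dsDNA masses are assumptions introduced here for illustration only.

```python
import math

# Constants from the Methods (7 pg dsDNA per cell; 2000 um^2 per ECFC; 8 mm punch),
# plus an assumed hexagonal packing efficiency of pi/(2*sqrt(3)) ~ 0.9069.
M_ECFC_PG = 7.0                  # pg dsDNA per ECFC
SA_ECFC_UM2 = 2000.0             # um^2 occupied by a single spread ECFC
PACKING_EFF = math.pi / (2 * math.sqrt(3))   # hexagonal packing efficiency (assumed value)
PUNCH_DIAMETER_MM = 8.0
SA_SUBSTRATE_UM2 = math.pi * (PUNCH_DIAMETER_MM * 1000 / 2) ** 2  # area of the PVA punch

def percent_confluence(m_total_pg: float, m_background_pg: float = 0.0) -> float:
    """Percent confluence from total dsDNA (pg) on a sample, minus the untreated background."""
    cells = max(m_total_pg - m_background_pg, 0.0) / M_ECFC_PG
    sa_total = cells * SA_ECFC_UM2 / PACKING_EFF
    return 100.0 * sa_total / SA_SUBSTRATE_UM2

# Illustrative values only (not measured data):
print(round(percent_confluence(m_total_pg=60000.0, m_background_pg=2000.0), 1))
```

In practice, the background mass would be the mean dsDNA signal measured on untreated punches, as described above.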
Results Scanning Electron Microscopy Images Images of the untreated, activated, and aged samples collected by using SEM at 45° relative to the surface are shown in Figure 2 . Images of untreated PVA are shown in the top row. Images of activated PVA are in the left column and aged samples are in the right column. From top to bottom, the order of RIP treatments is Untreated, Ar-50, Ar-100, N 2 -50, N 2 -100, O 2 -50, and O 2 -100. The larger images have a 50 μm scale bar and were collected at 1000× magnification. Inset images have a 1 μm scale bar and were taken at 50 000× magnification. The underlying microscale porous features observed in the untreated PVA (top row) appear to remain in the activated samples, which also showed additional nanotopographic features and therefore hierarchical structures (left-hand column). The same microstructures are visible, but less pronounced in the aged samples, while the magnitude of the nanostructures diminished significantly or disappeared completely in the aged samples (right-hand column). Atomic Force Microscopy-Based Analysis of Surface Topography AFM was used to characterize the topographies of the untreated and RIP-treated PVA surfaces. Surface scans revealed differences in surface roughness, which varied according to the precursor gas and RF power. The results of topographical characterization are shown in Figure 3 . The root-mean-square roughness, R q , of the activated samples was approximately one order-of-magnitude greater than that of the aged samples of the same treatment. The observed differences in roughness for the recently activated samples required more scans of the surface to properly account for heterogeneity within and between the samples. In contrast, the aged samples exhibited surfaces that relaxed almost to the baseline roughness observed in untreated PVA, with relative homogeneity within and between the samples. Untreated and activated samples at both powers were found to be significantly different ( p < 0.05) and consistent across all precursor gases. The N 2 and Ar-activated samples exhibited similar levels of roughness at both RF powers, but the O 2 samples were consistently rougher at both powers. Morphologically, the activated and aged O 2 -100 samples appeared to be qualitatively different from those of the other treatments within their respective groups. Using a two-tailed t test, 100 W samples were determined to have significantly different R q values than those of the samples treated at 50 W ( p < 0.05). Increased roughness at higher RF powers was observed for both activated and aged samples. X-ray Photoelectron Spectroscopy Spectral Analysis of Surface Chemistry XPS was used to probe the changes in the surface chemistry of untreated, activated, and aged PVA. The N2-100 sample was representative of the chemical species observed in the other five samples and shown in Figure 4 . High-resolution scans collected in the C 1s region of the N2-100 sample revealed the presence of four species under the C 1s peak envelope. Untreated PVA contains two of the four species, C–C (285 eV) and C–O/C–N (286.5 eV). 50 Untreated PVA contains equal parts (50:50) of C–C and C–O; however, the C–C component likely contains adventitious carbon, which would explain the higher than expected C–C (285 eV) peak. High-resolution scans of the RIP-treated, activated, and aged samples in the C 1s region ( Figure 4 B,C) revealed the addition of two new species: an amide bond, N–C=O (288 eV), and a carboxylic acid group (289 eV). 
There was no peak in the high-resolution spectra in the N 1s region for untreated PVA, as shown in Figure 4 D. For activated PVA, the spectra in the N 1s region are shown in Figure 4 E and contain two distinct peaks at 400.2 and 401.9 eV. The first of the two peaks corresponds to C–N species, such as C–NH 2 51 − 54 or N–C=O, 54 − 58 which is consistent with the species observed after plasma exposure. 59 , 60 The second peak, centered at 401.9 eV, consists of charged quaternary nitrogen. 52 − 54 , 61 Although the charged functional groups likely include C–NH 3 + and C=NH 2 + , the overlap of the charged quaternary groups does not allow for the assignment of a single quaternary species. The N 1s spectra of the aged samples ( Figure 4 F) contained a single peak at 400.2 eV. The fractions of charged and uncharged species present in each treatment are shown in Figure 4 H–J. Quaternary or charged nitrogen species were observed only in the activated samples. The error bars, representing the three different scans, show a wide distribution of charged and uncharged species for the O 2 -50 and Ar-100 samples. Percent Confluence of Endothelial Colony-Forming Cell Cultures The percent confluence of the ECFCs on plasma-treated PVA is shown in Figure 5 . The percent confluence of ECFCs attached to the activated samples for O 2 -50, O 2 -100, Ar-50, Ar-100, N 2 -50, and N 2 -100 was 42, 29, 75, 66, 16, and 49%, respectively. The percent confluence of ECFCs attached to aged samples for O 2 -50, O 2 -100, Ar-50, Ar-100, N 2 -50, and N 2 -100 was 11, 21, 58, 13, 14, and 28%, respectively. The activated and aged samples were found to be significantly different by ANOVA ( p < 0.05). The activated O 2 -100 and N 2 -50 samples were not significantly different from the untreated samples. In the aged group, only Ar-50 showed a significant increase in attachment compared to the untreated samples. Electrostatic Force Model Figure 6 contains the details of the analytical model used to estimate the electrostatic forces across activated samples. The geometry used to calculate the forces is shown in Figure 6 A. The electrostatic force as a function of separation distance and pillar radius is shown in Figure 6 B. Figure 6 C,F shows the SEM images used to calculate the separation distance and pillar radius for Ar-50 and Ar-100, respectively. Figure 6 D shows the calculated values of the electrostatic force between nanohairs normalized by the maximum force within the data set, that of the Ar-50 treatment. Based on the derived model, the groups exposed to nitrogen treatment exhibited force magnitudes approximately 30% lower than that of the Ar-50 group. Additionally, the Ar-100, O 2 -50, and O 2 -100 samples showed significantly reduced electrostatic forces, lower by approximately 60, 70, and 80%, respectively.
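Before moving to the Discussion, the following is a minimal sketch of the root-mean-square roughness metric (R q) that underlies the AFM comparison above, assuming a plane-fitted height map; the synthetic 512 × 512 array and the first-order plane fit stand in for the NanoScope processing described in the Methods and are not the measured data.

```python
import numpy as np

def rms_roughness(height_map: np.ndarray) -> float:
    """R_q of a 2D height map (nm) after removing a best-fit plane (first-order flatten)."""
    ny, nx = height_map.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    # Least-squares plane z = a*x + b*y + c, then subtract it (tilt/offset removal).
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
    coeffs, *_ = np.linalg.lstsq(A, height_map.ravel(), rcond=None)
    residual = height_map.ravel() - A @ coeffs
    return float(np.sqrt(np.mean(residual ** 2)))

# Illustrative 512 x 512 height map (nm); real input would be the exported AFM scan.
rng = np.random.default_rng(0)
z = 5.0 * rng.standard_normal((512, 512)) + 0.01 * np.arange(512)  # noise plus a slight tilt
print(f"R_q = {rms_roughness(z):.2f} nm")
```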
Discussion Endothelial Colony-Forming Cell Coverage Regardless of the precursor gas, RIP treatment of PVA was shown to support ECFCs at 48 h compared with untreated PVA, and to different degrees for different powers and precursors. The percent confluence of ECFCs on activated vs aged samples supports the existence of a shelf life for the RIP-treated material. Although RIP-treated PVA has been shown to support cell attachment, spreading, and proliferation, 36 , 62 to the best of our knowledge, this is the first study to demonstrate time-dependent changes in ECFC attachment to RIP-treated PVA. This was demonstrated by comparing the activated samples, which showed increased endothelialization, with their corresponding aged samples, which adhered significantly fewer cells, except for the O 2 -100 and N 2 -50 samples. These data suggest that there is a period after treatment in which the material can successfully facilitate ECFC attachment. Adherence of endothelial cells to the material is critical for endothelialization and, therefore, for its long-term performance and clinical acceptance as a potential candidate for SVGs. It is well established that treating polymers with RIP leads to changes in the surface chemistry of the materials as well as changes in roughness, surface energy, and wettability. 63 − 65 It is difficult to determine whether any one of these changes in material properties alone is responsible for the increased cell affinity. ECFC viability after attachment depends on specific cell-surface integrin interactions with ECM proteins along with the associated signaling networks and is therefore not likely to be reducible to a single determinant surface property. However, alteration of the surface chemistry or roughness of a biomaterial can lead to changes in its ability to adsorb ECM proteins from the surrounding microenvironment or encourage a cell to deposit its own matrix proteins. 63 Cells can interact directly with a surface briefly through weak interactions in the absence of ECM proteins but will undergo apoptosis within 24 h if vital inter- and intracellular signaling has not been initiated. 63 , 64 Cell viability studies, expression of adhesion molecules, measurements of deposited proteins, and characterization of EC behaviors such as attachment, migration, and proliferation should be performed in the future to isolate the specific binding mechanism of ECFCs to RIP-treated surfaces and their phenotypic consequences, as well as to determine the effect of both activation by RIP and aging on the endothelialization potential of PVA. Additionally, future studies could consider the use of SEM images of endothelialized samples to further explore the colocalization of geometric features on RIP-treated PVA and cellular substructures such as lamellipodia, filopodia, and focal adhesions. PVA Surface Roughness The RIP treatments had a significant impact on the surface roughness at each power level compared to the untreated sample. This was most prominent for the O 2 treatments, possibly because of an increased interaction between the plasma and native oxygen within the polymer. The decrease in the surface roughness over time to near-baseline levels for the aged samples can be explained by hydrophobic relaxation, which results from a thermodynamically unstable surface composed of hydrophilic groups that are entropically driven into the bulk of the polymer.
Relaxation reduces the roughness of the surface and shifts the chemical groups initially formed on the surface to the bulk of the material. This outcome has implications for the broader use of plasma-treated polymers in medicine, as it supports the idea of shelf life. The shelf life of RIP-treated polymers designed to encourage endothelialization represents a period during which the polymer actually promotes EC adherence. During this period, the treated polymer has rougher surface topography and functions differently than samples treated with RIP but given time to age. This idea is important for the use of PVA as a vascular graft material and its endothelializability. Most likely, other important characteristics of biomaterials such as their thrombogenicity will change as a roughened and charged surface relaxes, although that was not specifically tested in this study. Charged Functional Groups The disappearance of the peak at 401.9 eV in the aged sample indicated that the quaternary species previously observed in the activated samples were no longer present, suggesting that at some point between the initial treatment and the data collection that occurred 230 days after the treatment, the charged species were neutralized or transported off the surface. The results from the XPS high-resolution data in the N 1s region for N 2 -100 are representative of the other plasma treatments in terms of observable charged species. The only observed charged species were the quaternary nitrogen groups on the surface. It has been shown in the literature that nitrogen is not incorporated beyond the surface of RIP-treated polymers. 27 , 66 The presence and duration of charged nitrogen species indicate the period in which the polymer is in its optimal state to facilitate endothelialization. More studies may reveal the optimal length of time and storage conditions for such materials to be ideal for use as vascular grafts to enable the greatest potential for endothelialization. The charged species observed in all treatments were found in only activated samples. No quaternary nitrogen was found in aged samples across the different treatments. This finding supports the idea that charged species are neutralized over time and contribute to the adhesion of ECFCs either through the direct binding of integrins or through the electrostatic attraction of binding proteins. Nanostructures The formation mechanism of the observed nanostructures shown in Figures 2 and 3 can be explained by ion etching of the insulating PVA. Previous work has shown that ion etching of an insulating polymer that considers charging leads to the formation of high-aspect-ratio features. 67 This work suggests that the impinging ions are deflected from the peaks and enhance the etch rate of the sidewalls of the features to form high-aspect-ratio structures, which are described as nanohairs produced by anisotropic etching. 68 , 69 The nanohairs were confirmed in the AFM scans and are shown in Figure 3 . However, the structure was more easily observed in the SEM images in Figure 2 , which highlights the changes in the surface at the nano- and microscales for untreated PVA followed by RIP treatment. SEM images also provided evidence of material ablation and redeposition in the O 2 -50 and O 2 -100 samples, which is a phenomenon known to occur when RIPs are generated using DC bias. 
70 Therefore, redeposition is another important phenomenon to consider when designing RIP-treated vascular graft materials in addition to hydrophobic recovery, nanotopographic relaxation, and quaternary nitrogen neutralization or transport. The interplay of near-field and electrostatic forces is known to significantly influence the geometric features at the nanoscale, especially in the context of charged surfaces. 49 In our study, these forces might have played a substantial role in the observed degradation mechanics of nanotopography across all RIP treatments, especially with the gradual dissipation of quaternary species over time. To further understand these dynamics, we turned to Whipple’s modified model, which describes the electrostatic forces between two uniformly charged cylinders. When applied to our RIP-treated PVA samples, this model suggested that Ar-50, Ar-100, N 2 -50, and N 2 -100 treatments could generate the most significant repulsive electrostatic forces between nanohairs. This is primarily due to their respective widths and separation distances. Notably, our model predicted significant differences in electrostatic forces among the activated samples, particularly within the O 2 -50 and O 2 -100 groups. These findings underscore the extent to which different RIP treatments can influence surface topographies and, thereby, modulate the electrostatic interactions between nanostructures. They also provide crucial insights into how the geometry might be involved in the degradation kinetics of a roughened surface. For instance, we observed that PVA subjected to N 2 and Ar RIPs demonstrated reduced relaxation over time. This suggests a potential correlation between the treatment-induced geometry and certain mechanisms that slow surface topography degradation. Although our model’s main utility lies in suggesting a mechanism for the observed degradation of nanohairs on RIP-treated PVA, it also allows us to establish an indirect link between near-field electrostatic forces and endothelialization. This is achieved by probing the potential effects of these forces on the degradation of nanofeatures, which are known to influence endothelialization. 17 Nonetheless, this is a complex area that warrants further investigation, both to clarify the underlying mechanisms contributing to these observed differences in force magnitudes and to understand how these forces may correlate with the instability of surface topographies.
Conclusions This study characterized the effect of RIP treatment and storage time on STMP cross-linked PVA for potential use as a small-diameter synthetic vascular graft material for treating cardiovascular disease. RIP treatment introduces nanohairs and charged nitrogen species onto PVA, which enhance its endothelializability. AFM and SEM analyses of treated surfaces indicated a rougher surface in the activated samples after plasma treatment that relaxed to a smoother surface over time. SEM images showed the presence of high-aspect-ratio features, which were the main contributors to the surface roughness observed in AFM, as well as the redeposition of ablated PVA in samples treated with O 2 RIPs. XPS revealed the addition of new charged functional groups during the treatment, which disappeared over time. We proposed a model that suggests that the charge of RIP-treated PVA affects the physical degradation of nanohairs and vice versa. The increase in the endothelialization potential of activated samples compared to untreated PVA correlates with a rougher surface and the presence of charged functional groups. After the samples were aged for 230 days, the smoother and less charged surfaces exhibited a decrease in endothelialization compared with the recently activated surfaces, although attachment generally remained at or above the level of untreated PVA. The chemical and physical changes in the PVA surface resulting from RIP treatment are known to promote tissue regeneration, and previous studies have shown that PVA treated with N 2 -100 plasma is less thrombogenic than ePTFE, the current clinical standard synthetic vascular graft material. However, our results also suggest that RIP modifications of PVA are not permanent and appear to relax over time. If this time-dependent phenomenon is generally applicable to plasma-treated polymers, as this study and the literature suggest, it could have major implications for the broader usage of polymeric materials that receive RIP treatment, including the durability of plasma modifications intended to sterilize medical devices, improve biocompatibility, or improve cell adhesion.
Synthetic small-diameter vascular grafts (<6 mm) are used in the treatment of cardiovascular diseases, including coronary artery disease, but fail much more readily than similar grafts made from autologous vascular tissue. A promising approach to improve the patency rates of synthetic vascular grafts is to promote the adhesion of endothelial cells to the luminal surface of the graft. In this study, we characterized the surface chemical and topographic changes imparted on poly(vinyl alcohol) (PVA), an emerging hydrogel vascular graft material, after exposure to various reactive ion plasma (RIP) surface treatments, how these changes dissipate after storage in a sealed environment at standard temperature and pressure, and the effect of these changes on the adhesion of endothelial colony-forming cells (ECFCs). We showed that RIP treatments including O 2 , N 2 , or Ar at two radiofrequency powers, 50 and 100 W, improved ECFC adhesion compared to untreated PVA and to different degrees for each RIP treatment, but that the topographic and chemical changes responsible for the increased cell affinity dissipate in samples treated and allowed to age for 230 days. We characterized the effect of aging on RIP-treated PVA using an assay to quantify ECFCs on RIP-treated PVA 48 h after seeding, atomic force microscopy to probe surface topography, scanning electron microscopy to visualize surface modifications, and X-ray photoelectron spectroscopy to investigate surface chemistry. Our results show that after treatment at higher RF powers, the surface exhibits increased roughness and greater levels of charged nitrogen species across all precursor gases and that these surface modifications are beneficial for the attachment of ECFCs. This study is important for our understanding of the stability of surface modifications used to promote the adhesion of vascular cells such as ECFCs.
Data Availability Statement The data sets generated for this manuscript will be made available upon reasonable request. Author Contributions R.A.F. designed and performed the experiments and contributed to data analysis, figure preparation, and writing of the manuscript. N.M.B. contributed to the design of the experiments, figure preparation, and manuscript writing. J.S.P. designed and performed the experiments and contributed to data analysis, figure preparation, and writing of the manuscript. C.L. contributed to data analyses, figure preparation, numerical modeling, and writing of the manuscript. G.A.M. contributed to data analyses, figure preparation, numerical modeling, and writing of the manuscript. M.T.H. contributed to the design of the experiments, data analysis, writing of the manuscript, and project guidance. J.E.B. contributed to the data analysis, figure preparation, writing of the manuscript, and project guidance. P.L.J. conceived the project, designed and performed the experiments, and contributed to data analysis, figure preparation, numerical modeling, writing of the manuscript, and provided project guidance. The research reported in this publication was supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number SC2GM140991, National Institutes of Health award numbers R01 HL144113 and R21 HD096301, and California State University Program for Education & Research in Biotechnology (CSUPERB: NI-2023). The authors declare no competing financial interest. Notes This study does not perform any experiments on animals. All collection of primary cell samples took place at the Oregon National Primate Research Center (ONPRC) and approved by the Institutional Animal Care and Use Committee when appropriate. Baboons were cared for at ONPRC according to the “Guide to the Care and Use of Laboratory Animals” prepared by the Committee on Care & Use of Laboratory Animals of the Institute of Laboratory Animal Resources, National Research Council (International Standard Book, Number 0-309-05377-3, 1996, the United States).
CC BY
no
2024-01-16 23:43:51
ACS Appl Mater Interfaces. 2023 Dec 20; 16(1):389-400
oa_package/94/bc/PMC10788828.tar.gz
PMC10788829
38118131
Introduction Fast and reliable detection of glucose is of great scientific and technological importance both in healthcare and in industrial analytical applications. The measurement of glucose concentration is important in the food and biotechnology industries as well as in clinical diagnostics, where monitoring glucose levels plays a critical role in treating diabetic patients. The most common glucose sensors use an electric current response from an electrode covered with a glucose oxidation catalyst. When an appropriate electric potential is applied to the electrode, the catalytic oxidation of glucose results in a measurable current that can be correlated to the glucose concentration. Traditionally, the glucose oxidation catalyst is an immobilized enzyme, but there is growing interest in developing nonenzymatic sensors. 1 Enzymatic glucose sensors suffer from drawbacks due to denaturation of the enzymes, which can occur due to variations in temperature, pH, and humidity. 2 − 4 Nonenzymatic sensors have been developed using a wide range of electrocatalysts. 5 − 11 The main advantage of nonenzymatic sensors is their robust stability under a range of storage and operating conditions that would destroy enzymes. Depending on how they are constructed, nonenzymatic sensors also have the potential to be of much lower cost than enzymatic sensors. The detection of glucose using nonenzymatic metal-based sensors via direct oxidation has its own set of challenges, however. Since the most important application is in glucose monitoring for diabetes treatment, nonenzymatic sensors should ideally work on biological samples, such as blood plasma or whole blood. In addition, the sensors should provide accurate measurements for patients whose blood glucose level falls outside the normal range of 4.4–6.6 mM. 12 Blood sugar concentrations for healthy people range between 4.0 and 5.4 mM when fasting and up to 7.8 mM postprandially. Prediabetes corresponds to values between 5.5 and 6.9 mM when fasting and between 7.8 and 11.0 mM postprandially. Blood sugar levels are >7.0 mM when fasting and >11.1 mM postprandially for people suffering from diabetes. Accuracy over a wide range of glucose concentrations is required, as well as performance in the presence of the numerous other biochemical compounds found in blood. Some of the earliest nonenzymatic sensors used precious metal-based electrocatalysts. Besides being expensive, they suffer from limited sensitivity and selectivity, surface poisoning by adsorbed intermediates, and interference from chloride ions. 7 , 13 There is ongoing development of transition metal-based glucose sensors, with particular focus on nickel- and cobalt-based catalysts, to overcome the limitations of precious metal catalysts. 1 Transition-metal catalysts have been created that have good stability, low detection limits, fast response, high sensitivity, and low cost. 14 − 17 The lower selectivity of metal-based sensors relative to glucose oxidase enzyme-based sensors can be offset by engineering the morphology and surface of the electrode. Here, we report on the development of novel bimetallic catalysts synthesized directly on the surface of a titanium sensor electrode. By carrying out short electrochemical reduction reactions, metal nanoparticles can be uniformly deposited on the electrode surface. A second electrochemical reduction reaction can then be carried out that deposits nanoparticles of a different type of metal on top of the first.
The bimetallic film that is formed is composed of two types of transition-metal nanoparticles deposited sequentially, rather than an alloy. Nickel was chosen for the first layer deposition because it is one of the most widely studied glucose oxidation catalysts. 1 , 18 For the second layer, copper and silver were chosen for investigation because of their known performance in catalyzing glucose oxidation alone or when coupled with other metals. 9 , 18 − 20 The fabricated bimetallic thin film electrodes have a high surface area and a greater number of catalytically active sites and were found to have excellent electrochemical properties for use in glucose sensing.
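As a concrete illustration of the clinical thresholds quoted in the Introduction, the short sketch below buckets a blood glucose reading into the fasting and postprandial ranges cited there; the function name, the handling of boundary values, and the example readings are illustrative assumptions rather than part of the sensor work.

```python
def classify_glucose(mM: float, fasting: bool = True) -> str:
    """Bucket a blood glucose reading (mM) using the fasting/postprandial cutoffs cited above."""
    if fasting:
        if mM < 5.5:
            return "normal"        # healthy fasting range ~4.0-5.4 mM
        return "prediabetes" if mM < 7.0 else "diabetes"
    # postprandial
    if mM < 7.8:
        return "normal"            # healthy postprandial values up to ~7.8 mM
    return "prediabetes" if mM <= 11.0 else "diabetes"

print(classify_glucose(6.2, fasting=True))    # -> prediabetes
print(classify_glucose(12.2, fasting=False))  # -> diabetes
```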
Results and Discussion Morphology and Composition of Coatings The morphology and size of the nanoparticles in the single transition metal-based film or bimetallic composite film were studied in detail using scanning electron microscopy (SEM). The three samples, namely, Ni, Cu@Ni, and Ag@Ni, were synthesized using the previously described electrolytic deposition process at a constant current density. In all the experiments, a 25 mm × 25 mm sized platinum plate was used as the anode and an 8 mm × 8 mm sized titanium plate was used as the cathode. All of the electrochemical deposition reactions were carried out in a two-electrode system. Crystalline coatings of Ni with nanosphere-shaped particles were fabricated at varying reaction times, including, 2, 4, 6, 8, and 12 min at a reaction temperature of 95 °C under constant current density of 62.5 mA/cm 2 . As seen in Figure 1 , the size and distribution of the Ni nanoparticles varied with the reaction time. At the highest reaction time of 12 min, larger Ni crystals were formed sometimes overlapping each other with an increasing tendency of agglomeration at irregular spots on the surface of the coatings. At the lowest reaction time of 2 min, smaller distinct Ni crystals were formed which were spherical in the shape. The electrochemically deposited Ni crystals on the Ti plate acted as the cathode for a second stage deposition of Ag or Cu. Coatings of Ni nanocrystals deposited for shorter reaction times had void regions without Ni at some places on the surface of the electrode which reduced the effective surface area and catalytic activity toward glucose molecules. At higher reaction times, electrodeposited Ni nanocrystals tended to agglomerate with materials being formed in the bulk, which reduced the effective outer surface area. Hence, for all of the second-stage deposition reactions, Ni coatings electrochemically deposited for a reaction time of 8 min on a Ti plate were chosen as the working electrode. The Ni coating deposited using the 8 min reaction time from here on will be termed sample Ni. Sample Ni was used as a cathode in the second-stage deposition of Cu or Ag crystals. The average diameter of Ni crystals in both the single metal and bimetallic samples was approximately 75 nm as measured from the SEM images. For the sample Cu@Ni as shown in Figure 2 , copper (Cu) nanoparticles electrodeposited on Ni crystals showed a tendency to agglomerate at the surface of Ni. At higher copper concentration, these agglomerates formed a single nanostructure, whereas at an optimized low Cu concentration distinct Cu nanocrystals were observed on the surface. The diameter of an individual Cu nanoparticle averaged 30 nm, whereas the size of these agglomerated Cu@Ni nanostructures varied from 100 to 250 nm. For the sample Ag@Ni as shown in Figure 3 , Ag nanoparticles were deposited uniformly and distinctly all over the Ni crystals. The diameter of the Ag nanoparticles averaged 20 nm. The compositions of the individual as well as composite metallic coatings were studied using electron dispersive X-ray spectroscopy (EDX). Elemental composition was measured at three different positions of the sample, and the data were averaged. The Ni concentration was 20 wt % averaged across all the samples, whereas the amount of Ag and Cu was found to be approximately 5 and 4 wt %, respectively. The average elemental composition along with standard deviation in measurement is shown in Table 1 . 
Additionally, based on the individual elemental ion mapping images (the Supporting Information file), the transition-metal ions were found to be uniformly distributed on the surface of the Ti substrate. Six samples each of Ni, Cu@Ni, and Ag@Ni were synthesized under the same conditions used for the samples analyzed in Table 1 and then removed from the Ti surface using an ultrasonic bath for XRD analysis. Dry metallic powder was collected by briefly heating the ultrasonicated solution containing the metal nanoparticles at 80 °C. Ultrasonication effectively removed the coatings from the Ti substrate; however, the possibility of breaking the nanoparticles into smaller fragments remained. Because the nanoparticles were heated during drying, oxide peaks were recorded in the XRD spectra. The XRD spectra of all three samples are shown in Figure 4 . The diffraction patterns of Ni, Ag, CuO, and Cu matched the standard reference peaks with ICDD card numbers of 03-065-0380, 01-077-6577, 41-0254, and 01-071-4607, respectively. The crystallite size of the metal nanoparticles may be calculated using the Debye–Scherrer relation, D = Kλ/(β cos θ), where D is the mean size of the crystalline domain, K is the dimensionless shape factor, λ is the X-ray wavelength, β is the line broadening at half the maximum intensity [full width at half-maximum (fwhm)], and θ is the Bragg angle. For the sample Ni, as shown in Figure 4 a, strong Ni diffraction peaks were seen at 44.40, 51.74, and 76.15°. These three peaks represented the Ni crystalline planes (111), (200), and (220), respectively. Calculating the Ni crystallite size using the Debye–Scherrer relation for the (111) peak gave a result of 28.8 nm, which was much smaller than the nanoparticle size obtained from SEM images (75 nm). The difference in size is possibly due to the nanoparticles being polycrystalline. For the sample Cu@Ni, the Ni peaks were seen at 44.46 and 51.80°, as shown in Figure 4 c, representing the crystalline planes (111) and (200), respectively. Strong diffraction peaks were also recorded at 38.10 and 64.95° due to the CuO crystalline planes (111) and (022), respectively. The observation of CuO peaks instead of Cu is due to the heating of the samples at 80 °C. The (111) crystalline plane of Cu is located at 43.379°, which is near the (111) plane of Ni, and the two peaks probably overlap, leading to broadening of the fwhm (β) of the peak located at 44.46°. Hence, the XRD peaks were treated as a qualitative rather than a quantitative measure of the crystallite size. Nanoparticle sizes were successfully measured from SEM images. In Figure 4 b, for the sample Ag@Ni, the Ni peaks were observed at 44.42 and 51.81°, representing the crystalline planes (111) and (200), respectively. Strong diffraction peaks were also recorded at 38.06, 64.82, and 77.85°, representing the crystalline planes (111), (220), and (311), respectively, of the Ag nanoparticles. The (200) crystalline plane of Ag is located at 44.599°, which is near the (111) plane of Ni. Here also, there was a possibility that the two peaks overlapped, leading to broadening of the fwhm (β) of the peak located at 44.42°. The chemical states of the metal atoms near the surface were investigated in detail using X-ray photoelectron spectroscopy (XPS) ( the Supporting Information file). The results confirmed the presence of metallic Ni in all three samples. Metallic copper was detected in the Cu@Ni sample, along with copper oxide. Metallic Ag was detected in the Ag@Ni sample.
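As an illustration of the Scherrer estimate discussed above, the following is a minimal sketch of the calculation; the shape factor K = 0.9, the Cu Kα wavelength of 0.15406 nm, and the fwhm used in the example are assumptions for illustration and are not values reported here.

```python
import math

def scherrer_size_nm(two_theta_deg: float, fwhm_deg: float,
                     wavelength_nm: float = 0.15406, K: float = 0.9) -> float:
    """Crystallite size D = K*lambda / (beta*cos(theta)), with beta converted to radians."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return K * wavelength_nm / (beta * math.cos(theta))

# Ni (111) reflection at 2-theta = 44.40 deg; the fwhm of 0.30 deg is illustrative only.
print(f"D ~ {scherrer_size_nm(44.40, 0.30):.1f} nm")
```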
The XPS data confirm the XRD results. Glucose Oxidation: Electrocatalytic and Electrokinetic Activity The electrocatalytic activity of bare Ti, Ni, Cu@Ni, and Ag@Ni toward glucose oxidation was examined by cyclic voltammetry (CV) in a 0.1 M NaOH aqueous solution at a scan rate of 10 mV/s. CV experiments were conducted between −0.6 and 0.7 V using glucose concentrations of 0, 1, and 3 mM. An additional CV was conducted for Ag@Ni between −0.6 and 1.1 V ( Figure 5 e) using glucose concentrations of 0, 1, and 3 mM to allow the completion of the anodic scan of the Ag@Ni electrocatalyst. As shown in Figure 5 a, bare Ti showed a small oxidation peak that was independent of the glucose concentration. Bare Ti was inert and acted as a control for all the reactions. The three other samples all showed oxidation peaks that increased with the glucose concentration, indicating catalytic activity due to glucose oxidation. Transition metal-based nonenzymatic glucose sensors depend on the transition metal/oxide surface being activated by hydroxide ions in a basic environment to act as a catalyst for glucose oxidation. The three transition metals used in our study are nickel (Ni), copper (Cu), and silver (Ag), which form the glucose sensor materials Ni, Cu@Ni, and Ag@Ni. Under the alkaline conditions of our experiment, metallic Ni in the presence of hydroxide ions is expected to be transformed to Ni(OH) 2 , which further reacts with hydroxide ions to form nickel oxyhydroxide (NiOOH) at 0.54 V during the anodic scan. 18 , 22 The NiOOH intermediate acts as an electrocatalyst for the oxidation of glucose to gluconolactone and is itself reduced back to Ni(OH) 2 during the cathodic scan. 23 − 25 In the case of the catalyst Cu@Ni, copper (Cu) is oxidized to copper oxide (CuO) under atmospheric conditions. 26 Besides Ni transforming to Ni(OH) 2 , CuO reacts with water to form Cu(OH) 2 , which then further changes to CuOOH, forming a combined CuOOH/NiOOH electrocatalyst. Glucose is oxidized to gluconolactone in the presence of CuOOH/NiOOH as the electrocatalyst. 27 , 28 In the case of the electrocatalyst Ag@Ni, glucose is oxidized in two steps in the presence of Ag-based electrocatalysts. As shown in Figure 5 e, during the anodic scan the Ag nanoparticles in Ag@Ni (1 mM glucose curve) form Ag 2 O/AgOH in the presence of hydroxide ions at the first peak at 0.62 V, and Ag 2 O/AgOH then changes to AgO at the second anodic peak at 0.80 V. 29 , 30 Similarly, Ni(OH) 2 forms the intermediate NiOOH. The AgO/NiOOH intermediate then acts as an electrocatalyst for the oxidation of glucose to gluconolactone, which is further oxidized to gluconic acid. During the cathodic scan, at peak locations of 0.34 and −0.05 V, Ag 2 O was regenerated from AgO and Ag was then regenerated from Ag 2 O. 31 − 34 The peak identifications are consistent with the mechanism of silver nanoparticle-catalyzed glucose oxidation described by Poletti Papi et al. 29 In the presence of glucose, all three catalysts Ni, Cu@Ni, and Ag@Ni exhibited a higher anodic peak current ( I pa ), and the observed anodic peak potential ( E pa ) corresponded to their respective redox units of Ni 3+ , Ni 3+ /Cu 3+ , and Ni 3+ /Ag 2+ , validating the effective participation of the active catalytic centers of the redox couples in the oxidation of glucose.
The increase in the anodic current as the glucose concentration was raised from 0 to 3 mM establishes the effective electrocatalytic activity of all three catalysts: Ni, Cu@Ni, and Ag@Ni. 18 , 24 , 35 The high effective surface area of all the samples, with 75, 30, and 20 nm sized nanocrystals (based on SEM images), was a major driving force behind the excellent electrocatalytic activity toward glucose oxidation. In this study, we used the geometric area of the electrode to calculate the performance of the electrodes toward glucose oxidation. In future work, we will measure the electrochemically active surface area (ECSA). Since the electrodes were made of thin films of nanostructured transition metals, the ECSA is likely higher than the geometric surface area. In the case of bimetallic thin film deposition, the second set of nanoparticles deposited on the surface of the Ni nanocrystals is expected to increase the effective surface area and enhance the number of active catalytic sites. In a later discussion, it will be shown that the overall electrocatalytic performance of the bimetallic catalysts exceeds that of Ni alone. To understand the electrokinetics of the catalytic glucose oxidation system for all three samples Ni, Cu@Ni, and Ag@Ni, a set of CV measurements was recorded with the scan rate ranging from 10 to 100 mV/s in increments of 10 mV/s in the presence of 1 and 3 mM glucose (the Supporting Information ). As expected, the redox peak current density increased with an increase in the scan rate. In addition, the peak-to-peak separation between the oxidation and reduction peaks increased with the scan rate, and the ratio of the oxidation to reduction peak currents was not unity. The CV results indicate that the reaction is partially irreversible or quasi-reversible. 36 , 37 The CV data at various scan rates were converted to a Randles–Sevcik plot of peak current versus the square root of the scan rate, as shown in Figure 6 for each of the three catalytic electrodes. For a surface electrochemical reaction that is diffusion controlled, the peak current is expected to follow the Randles–Sevcik relation, i p ∝ A C 0 * (α D 0 ν)^(1/2), where D 0 is the diffusion coefficient of the analyte molecules, A is the area of the electrode, C 0 * is the concentration of the analyte molecules that diffuse, α is the transfer coefficient, and ν is the potential sweep rate. 38 The excellent linear fit of the data in Figure 6 is strong evidence that the electrocatalytic process is controlled by the diffusion of glucose molecules to the electrode/electrolyte interface. 39 − 42 For all three catalysts (Ni, Cu@Ni, and Ag@Ni), the anodic peak current was also found to increase as the glucose concentration was raised from 0 to 1 to 3 mM. For our diffusion-controlled process, as more glucose molecules are present in solution at a higher glucose concentration, a larger number of glucose molecules diffuse to the electrocatalyst surface, requiring a larger number of Ni 3+ , Ni 3+ /Cu 3+ , and Ni 3+ /Ag 2+ redox units to act as electrocatalysts for glucose oxidation. From this increase in the anodic peak current with increasing glucose concentration, we can conclude that redox units such as Ni 3+ (from Ni 2+ ) were successfully formed and that Ni 3+ participated in the catalytic oxidation of glucose. 43 , 44 Glucose Oxidation: Amperometric Measurement For glucose concentration measurement, amperometry is typically used, in which the electrode is held at a fixed potential and the current response is correlated to the glucose concentration.
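The diffusion-control check described above amounts to a linear regression of the peak current on the square root of the scan rate; a minimal sketch is given below, in which the scan rates follow the 10–100 mV/s protocol but the peak-current values are illustrative only, not the measured data.

```python
import numpy as np

# Scan rates from the CV protocol (V/s) and illustrative anodic peak currents (mA/cm^2).
scan_rates = np.arange(0.010, 0.101, 0.010)
i_peak = np.array([0.52, 0.72, 0.88, 1.01, 1.13, 1.23, 1.33, 1.42, 1.50, 1.58])

sqrt_v = np.sqrt(scan_rates)
slope, intercept = np.polyfit(sqrt_v, i_peak, 1)
r = np.corrcoef(sqrt_v, i_peak)[0, 1]

# A near-unity R^2 for i_p vs sqrt(v) is the signature of a diffusion-controlled process.
print(f"slope = {slope:.2f} mA cm^-2 (V/s)^-1/2, R^2 = {r**2:.4f}")
```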
This requires amperometric calibration of the electrodes against solutions of a known glucose concentration. A series of amperometric measurements were taken to calibrate each of the three types of electrodes (the Supporting Information ). The applied fixed potential was +0.54, +0.62, and +0.62 V vs Ag/AgCl for Ni, Cu@Ni, and Ag@Ni. Before the addition of glucose to the NaOH solution to conduct amperometry experiments, the electrodes were stabilized in 0.1 M NaOH solution at the fixed activating potential for 10 min. A quasi-stationary current of 28.52 μA/cm 2 (±0.51 μA/cm 2 ), 79.77 μA/cm 2 (±8.75 μA/cm 2 ), and 275.76 μA/cm 2 (±14.92 μA/cm 2 ) was recorded for Ni, Cu@Ni, and Ag@Ni, respectively, at the end of the stabilization period at a zero glucose concentration. The quasi-stationary current was determined by averaging the current response in the last 30 s of the stabilization period. After stabilization and blank current measurement, the current response was measured in the same 60 mL of 0.1 M NaOH as a 120 mM glucose stock solution was added incrementally. After each addition of glucose, the solution was stirred for 40 s, followed by a 260 s stabilization period. The current response to a particular glucose concentration was obtained by averaging the current readings in the last 30 s of the stabilization period. Figure 7 shows the resulting calibration curve for each of the three types of electrodes with the glucose concentration in mM as the x axis and the current response in mA/cm 2 (based on geometric surface area of the electrode) as the y axis. The calibration curves in Figure 7 were used to calculate the limit of detection, sensitivity, and linear range of response. The linear response range was defined as the bracketed glucose concentration range for which a linear regression could accurately represent the glucose concentration versus the current response. Sensitivity was calculated as the slope of the linear regression. The detection limit was calculated using the formula: (3 × sd)/ S (where, sd is the standard deviation of the blank signal and S is the slope of the calibration curve). Table 2 compares the figures of merit for all three samples Ni, Cu@Ni, and Ag@Ni. The samples Ni, Ag@Ni, and Cu@Ni exhibited a linear range of 0.2–1.8, 0.2–6.4, and 0.2–12.2 mM, respectively, with a high coefficient of determination of 0.983, 0.995, and 0.995, respectively. Cu@Ni showed the highest sensitivity of 420 μA/(mM cm 2 ) and the lowest for Ni with a value of 110 μA/(mM cm 2 ). The detection limits for Ni, Cu@Ni, and Ag@Ni were 14, 62.5, and 140 μM, respectively. The linear range of response as well as the sensitivity improved significantly for the bimetallic samples Cu@Ni and Ag@Ni in comparison to Ni. The sensitivity of Ag@Ni and Cu@Ni was 3× and 4× higher than that of Ni due to higher active catalytic sites in the bimetallic samples. The linear range of response for Ni was 0.2–1.8 mM of the glucose concentration. The upper limit of the linear response range increased from 1.8 to 6.4 and 12.2 mM for Ag@Ni and Cu@Ni, respectively. The linear response of the Cu@Ni electrode spans the entire normal glucose concentration range in human blood, and the upper limit of 12.2 mM is high enough to encompass warning levels indicating prediabetic and diabetic conditions. Comparison to Other Nonenzymatic Sensors The literature review on nonenzymatic electrochemical glucose sensors by Hwang et al. 
shows that noble metal, nonprecious transition metal/metal oxide, and metal alloy/composite catalytic materials have been designed and developed for electrochemical glucose sensing. 15 Noble metal-based electrodes show a wide linear range of response toward glucose oxidation, covering the human blood sugar levels of 2–8 mM; however, they show poor sensitivity and are easily poisoned by interfering molecules, besides being expensive. Their nonprecious transition-metal counterparts have good sensitivity as well as anti-interference properties. However, the majority of the transition metal-based glucose sensing electrodes have a linear range of response that does not sufficiently cover human blood glucose concentrations. Furthermore, the extremely high sensitivity of some of the transition metal/metal oxide electrodes was due to high-surface-area substrates in the form of foams or porous supports. 15 In this work, a broad linear response of 0.2–12.2 mM was achieved with a sensitivity of 420 μA/(mM cm 2 ) for the sample Cu@Ni. In one study, Anu Prathap et al. prepared CuO nanoparticles in the presence of tartaric acid/citric acid/amino acids and achieved a linear range of response of 0.9–16.0 mM for glucose oxidation. However, during the amperometric experiments, the authors modified a Pt electrode with the prepared CuO, and there was no discussion of whether the underlying Pt electrode played a role in the wide linear range recorded. Additionally, the authors reported a sensitivity of only 9.02 μA/mM. 45 Similarly, in another work, Subramanian et al. deposited rGO/Ni(OH) 2 composites on Au electrodes to obtain a linear range of response of 15 μM–30 mM with a sensitivity of 11.4 mA mM –1 cm –2 . Here also, there was no discussion of whether the underlying gold substrate played a role in the high figures of merit. 46 In another work, Zhang et al. started with a Cu–Zr–Ag ingot and developed metallic glass ribbons by melt spinning, followed by dealloying and other procedures, such as anodizing, to finally form nanowires on nanoporous substrates. The dimensions of the nanoporous substrate, and how they might affect the reported linear range of glucose detection of up to 15 mM with a sensitivity of 1310 μA/(mM cm 2 ), were not discussed in detail. 47 A glucose oxidase-based enzymatic glucose sensor, Gox/Au–ZnO/GCE, prepared by Fang et al. recorded a linear range of response of 1–20 mM with a sensitivity of 1.409 μA/mM. 48 Here also, in addition to the enzyme, gold was used in the electrode. Jeong et al. prepared another enzymatic sensor, Gox/3D MoS 2 /graphene aerogel, with a linear range of glucose detection of 2–20 mM and a sensitivity of 3.36 μA/mM. 49 In a recent review, Sehit and Altintas tabulated the performance of enzyme-based glucose sensors, with the widest reported linear range of glucose detection being 0–25 mM and the highest reported sensitivity being 289 μA/(mM cm 2 ), for different electrode materials. 50 Additionally, a very recent review of copper-based glucose sensors compiled figures of merit from literature reports on 520+ sensors. 51 From this list, we find only eight studies that reported a higher upper limit of the linear response range while maintaining a greater sensitivity than our best catalyst (Cu@Ni; linear response range: 0.2–12.2 mM and sensitivity: 420 μA/(mM cm 2 )). Our work also has the added advantage of a simple, low-cost preparation of the bimetallic catalysts and sensors.
There were no expensive materials involved in the catalyst synthesis, and the resulting catalyst-coated solid titanium plate is used directly as the sensing electrode without requiring other materials to facilitate electrode transfer during the glucose oxidation reaction. Glucose Oxidation: Selectivity, Stability, and Reproducibility One of the important parameters to be considered for fabricating a sensor material for the catalytic oxidation of glucose is its ability to eliminate the interfering responses generated by the species with similar electroactivity as that of the target analyte. In this work, for the Ni, Cu@Ni, and Ag@Ni samples, the selectivity of glucose was tested in the presence of the common interferents, including ascorbic acid (AA), uric acid (UA), dopamine (DA), lactose, maltose, fructose, and galactose. The experiments were carried out at fixed applied potentials of +0.54, +0.62, and +0.62 V for Ni, Cu@Ni, and Ag@Ni, respectively. The changes in the current response after the addition of the glucose and interferent solutions were studied. After each addition of glucose or an interferent, the solution was stirred for 40 s followed by 260 s of the stabilization period. The current response to a particular chemical addition was obtained by averaging the current response readings in the last 30 s of the stabilization period. After the initial stabilization of 10 min, 1 mM glucose was added to 60 mL of 0.1 M NaOH followed by 0.2 mM of each of the interferents every 300 s, and at the end, another 2 mM glucose was added. From Figure 8 a, it was seen that for Ni, the current response after the addition of 1 mM glucose was 0.37 mA and the current increased to only 0.40 mA after addition of all the seven interferents with each constituent’s concentration being 0.2 mM. After the addition 2 mM glucose at the end there was no significant increase in the current response for the sample Ni. The electrode material sample Ni surface was positioned and deactivated in the presence of all the interfering chemicals and did not respond to the glucose molecules added at the end of the reaction. In the case of Cu@Ni, ( Figure 8 c) the current was stabilized to 0.45 mA after addition of the initial 1 mM glucose and the current increased at an average rate of 10% after the addition of each interferent. Further work on improving the selectivity of Cu@Ni needs to be pursued. Here, the Cu@Ni material electrode was fully functional even after adding all the interferents, and the current increased to 1.93 mA after the addition of 2 mM glucose at the end. The selectivity toward catalytic glucose oxidation for the sample Ag@Ni was recorded as shown in Figure 8 b. The initial current after 1 mM of glucose addition was measured as 0.62 mA and the average current increment after the addition of each of the interferents was only 2%. Ag@Ni was fully stable and functional until the end of the selectivity test and recorded an increased current response of 1.44 mA at the end after the addition of 2 mM glucose. The selectivity performance of Ag@Ni was attributed to the fine uniform deposition of Ag nanoparticles on the surface of Ni nanoparticles and to the individual material properties of Ni and Ag. The reproducibility of each of the electrode samples was tested by fabricating four replicates of each of Ni, Cu@Ni, and Ag@Ni and then measuring the anodic oxidation current at +0.54, +0.62, and +0.62 V, respectively, from the oxidation of 1 mM of glucose in 60 mL of 0.1 M NaOH solution. 
The mean anodic oxidation current was calculated, along with the standard deviation. The relative standard deviations for Ni, Cu@Ni, and Ag@Ni were 9.67, 6.47, and 5.12%, respectively. The stability of Ni nanospheres electrodeposited on the Ti surface as well as the stability of bimetallic nanocrystalline Cu@Ni and Ag@Ni after repeated CV experiments in the presence of 1 mM glucose in 0.1 M NaOH was studied using ICP-MS analysis. The ICP-MS measurements were conducted for the Ni, Cu, and Ag ions that leached out into NaOH solution after the repeated 25 CV cycles at 50 mV/s between −0.6 and 1.1 V. Three replicates of each of the samples were tested along with a control experiment where CV was done with bare Ti without any material deposited on its surface. Low amount of metal ions leached into the solution as shown in the Table 3 validating the stability of the materials deposited on the surface of Ti even after long experimental runs.
Results and Discussion Morphology and Composition of Coatings The morphology and size of the nanoparticles in the single transition metal-based film or bimetallic composite film were studied in detail using scanning electron microscopy (SEM). The three samples, namely, Ni, Cu@Ni, and Ag@Ni, were synthesized using the previously described electrolytic deposition process at a constant current density. In all the experiments, a 25 mm × 25 mm sized platinum plate was used as the anode and an 8 mm × 8 mm sized titanium plate was used as the cathode. All of the electrochemical deposition reactions were carried out in a two-electrode system. Crystalline coatings of Ni with nanosphere-shaped particles were fabricated at varying reaction times, including, 2, 4, 6, 8, and 12 min at a reaction temperature of 95 °C under constant current density of 62.5 mA/cm 2 . As seen in Figure 1 , the size and distribution of the Ni nanoparticles varied with the reaction time. At the highest reaction time of 12 min, larger Ni crystals were formed sometimes overlapping each other with an increasing tendency of agglomeration at irregular spots on the surface of the coatings. At the lowest reaction time of 2 min, smaller distinct Ni crystals were formed which were spherical in the shape. The electrochemically deposited Ni crystals on the Ti plate acted as the cathode for a second stage deposition of Ag or Cu. Coatings of Ni nanocrystals deposited for shorter reaction times had void regions without Ni at some places on the surface of the electrode which reduced the effective surface area and catalytic activity toward glucose molecules. At higher reaction times, electrodeposited Ni nanocrystals tended to agglomerate with materials being formed in the bulk, which reduced the effective outer surface area. Hence, for all of the second-stage deposition reactions, Ni coatings electrochemically deposited for a reaction time of 8 min on a Ti plate were chosen as the working electrode. The Ni coating deposited using the 8 min reaction time from here on will be termed sample Ni. Sample Ni was used as a cathode in the second-stage deposition of Cu or Ag crystals. The average diameter of Ni crystals in both the single metal and bimetallic samples was approximately 75 nm as measured from the SEM images. For the sample Cu@Ni as shown in Figure 2 , copper (Cu) nanoparticles electrodeposited on Ni crystals showed a tendency to agglomerate at the surface of Ni. At higher copper concentration, these agglomerates formed a single nanostructure, whereas at an optimized low Cu concentration distinct Cu nanocrystals were observed on the surface. The diameter of an individual Cu nanoparticle averaged 30 nm, whereas the size of these agglomerated Cu@Ni nanostructures varied from 100 to 250 nm. For the sample Ag@Ni as shown in Figure 3 , Ag nanoparticles were deposited uniformly and distinctly all over the Ni crystals. The diameter of the Ag nanoparticles averaged 20 nm. The compositions of the individual as well as composite metallic coatings were studied using electron dispersive X-ray spectroscopy (EDX). Elemental composition was measured at three different positions of the sample, and the data were averaged. The Ni concentration was 20 wt % averaged across all the samples, whereas the amount of Ag and Cu was found to be approximately 5 and 4 wt %, respectively. The average elemental composition along with standard deviation in measurement is shown in Table 1 . 
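The EDX weight percentages in Table 1 can also be expressed as approximate atomic ratios of the second metal to Ni, which is sometimes a more intuitive way to compare the two bimetallic coatings. The short sketch below uses the averaged values quoted above (Ni ≈ 20 wt %, Ag ≈ 5 wt %, Cu ≈ 4 wt %); standard molar masses are assumed, and the result is only a rough estimate because the weight percentages include the substrate signal.

```python
# Approximate atomic ratio of the second metal to Ni from the averaged EDX weight percentages.
molar_mass = {"Ni": 58.69, "Ag": 107.87, "Cu": 63.55}   # g/mol (standard values)
wt_percent = {"Ni": 20.0, "Ag": 5.0, "Cu": 4.0}         # averaged EDX values quoted in the text

mol_Ni = wt_percent["Ni"] / molar_mass["Ni"]
for metal in ("Ag", "Cu"):
    ratio = (wt_percent[metal] / molar_mass[metal]) / mol_Ni
    print(f"{metal}/Ni atomic ratio ~ {ratio:.2f}")
# Roughly 0.14 for Ag/Ni and 0.18 for Cu/Ni, i.e., both coatings remain strongly Ni-rich.
```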
Additionally, based on the individual elemental ion mapping images (the Supporting Information file), the transition-metal ions were found to be uniformly distributed on the surface of the Ti substrate. Six samples each of Ni, Cu@Ni, and Ag@Ni were synthesized under the same conditions used for the samples analyzed in Table 1 and then removed from the Ti surface using an ultrasonic bath for XRD analysis. Dry metallic powder was collected by briefly heating the ultrasonicated solution containing the metal nanoparticles at 80 °C. Ultrasonication effectively removed the coatings from the Ti substrate; however, the possibility of breaking the nanoparticles into smaller fragments remained. As the nanoparticles were heated for drying, oxide peaks were recorded in the XRD spectra. The XRD spectra of all three samples are shown in Figure 4. The diffraction patterns of Ni, Ag, CuO, and Cu matched the standard reference peaks with ICDD card numbers 03-065-0380, 01-077-6577, 41-0254, and 01-071-4607, respectively. The crystallite size of the metal nanoparticles may be calculated using the Debye–Scherrer relation, D = Kλ/(β cos θ), where D is the mean size of the crystalline domain, K is the dimensionless shape factor, λ is the X-ray wavelength, β is the line broadening at half the maximum intensity [full width at half-maximum (fwhm)] in radians, and θ is the Bragg angle. For the sample Ni, as shown in Figure 4a, strong Ni diffraction peaks were seen at 44.40, 51.74, and 76.15°. These three peaks correspond to the Ni crystalline planes (111), (200), and (220), respectively. Calculating the Ni crystallite size using the Debye–Scherrer relation for the (111) peak gave a result of 28.8 nm, which is much smaller than the nanoparticle size obtained from SEM images (75 nm). The difference in size is possibly due to the nanoparticles being polycrystalline. For the sample Cu@Ni, the Ni peaks were seen at 44.46 and 51.80°, as shown in Figure 4c, representing the crystalline planes (111) and (200), respectively. Strong diffraction peaks were also recorded at 38.10 and 64.95° due to the CuO crystalline planes (111) and (022), respectively. The observation of CuO peaks instead of Cu is due to the heating of the samples at 80 °C. The (111) crystalline plane of Cu is located at 43.379°, near the (111) plane of Ni, and the two peaks are probably convolved, leading to a broadening of the fwhm (β) for the peak located at 44.46°. Hence, the XRD peaks were treated as a qualitative confirmation of the phases present and not as a quantitative measure of the crystallite size; nanoparticle sizes were instead measured from the SEM images. In Figure 4b, for the sample Ag@Ni, the Ni peaks were observed at 44.42 and 51.81°, representing the crystalline planes (111) and (200), respectively. Strong diffraction peaks were also recorded at 38.06, 64.82, and 77.85°, representing the crystalline planes (111), (220), and (311), respectively, of the Ag nanoparticles. The (200) crystalline plane of Ag is located at 44.599°, which is near the (111) plane of Ni. Here also, the two peaks may be convolved, leading to a broadening of the fwhm (β) for the peak located at 44.42°. The chemical states of the metal atoms near the surface were investigated in detail using X-ray photoelectron spectroscopy (XPS) (the Supporting Information file). The results confirmed the presence of metallic Ni in all three samples. Metallic copper was detected in the Cu@Ni sample, along with copper oxide. Metallic Ag was detected in the Ag@Ni sample.
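As a quick illustration of the crystallite-size estimate discussed above, the sketch below applies the Scherrer relation to the Ni (111) reflection. The fwhm of the peak is not reported in the text, so the value used here is purely illustrative (chosen to land near the reported 28.8 nm); the shape factor K = 0.9 is a common assumption, and the wavelength is the Cu Kα value given in the XRD section.

```python
import math

# Scherrer estimate for the Ni (111) reflection: D = K * lambda / (beta * cos(theta)).
K = 0.9                   # dimensionless shape factor (typical assumption)
wavelength_nm = 0.15418   # Cu K-alpha wavelength in nm
two_theta_deg = 44.40     # peak position of the Ni (111) reflection (from the text)
fwhm_deg = 0.30           # hypothetical fwhm in degrees (not reported in the text)

theta = math.radians(two_theta_deg / 2.0)
beta = math.radians(fwhm_deg)          # line broadening converted to radians
D_nm = K * wavelength_nm / (beta * math.cos(theta))
print(f"Scherrer crystallite size: {D_nm:.1f} nm")
# ~28.6 nm for these inputs, close to the 28.8 nm value reported for the (111) peak.
```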
The XPS data confirm the XRD results. Glucose Oxidation: Electrocatalytic and Electrokinetic Activity The electrocatalytic activity of bare Ti, Ni, Cu@Ni, and Ag@Ni toward glucose oxidation was examined by CV in a 0.1 M NaOH aqueous solution at a scan rate of 10 mV/s. CV experiments were conducted between −0.6 and 0.7 V using glucose concentrations of 0, 1, and 3 mM. An additional CV was conducted for Ag@Ni between −0.6 and 1.1 V (Figure 5e) using glucose concentrations of 0, 1, and 3 mM to allow the completion of the anodic scan of the Ag@Ni electrocatalyst. As shown in Figure 5a, bare Ti showed a small oxidation peak that was independent of the glucose concentration. Bare Ti was inert and acted as a control for all the reactions. The three other samples showed oxidation peaks that increased with the glucose concentration, indicating catalytic activity due to glucose oxidation. Transition metal-based nonenzymatic glucose sensors depend on the transition metal/oxide surface being activated by hydroxide ions in a basic environment to act as a catalyst for glucose oxidation. The three transition metals considered in this study are nickel (Ni), copper (Cu), and silver (Ag), forming the glucose sensor materials Ni, Cu@Ni, and Ag@Ni. Under the alkaline conditions of our experiment, metallic Ni in the presence of hydroxide ions is expected to be transformed to Ni(OH)2, which further reacts with hydroxide ions to form nickel oxyhydroxide (NiOOH) at 0.54 V during the anodic scan. 18, 22 The NiOOH intermediate thus generated acts as an electrocatalyst for the oxidation of glucose to gluconolactone and is itself reduced back to Ni(OH)2 during the cathodic scan. 23−25 In the case of the catalyst Cu@Ni, copper (Cu) is oxidized to copper oxide (CuO) under atmospheric conditions. 26 In addition to Ni transforming to Ni(OH)2, CuO reacts with water to form Cu(OH)2, which is then further converted to CuOOH, forming a combined CuOOH/NiOOH electrocatalyst. Glucose is oxidized to gluconolactone in the presence of CuOOH/NiOOH as the electrocatalyst. 27, 28 In the case of the electrocatalyst Ag@Ni, glucose is oxidized in two steps in the presence of Ag-based electrocatalysts. As shown in Figure 5e, during the anodic scan, the Ag nanoparticles in Ag@Ni (1 mM glucose curve) form Ag2O/AgOH in the presence of hydroxide ions at the first peak at 0.62 V, and Ag2O/AgOH is then converted to AgO at the second anodic peak at 0.80 V. 29, 30 Similarly, Ni(OH)2 forms the intermediate NiOOH. The AgO/NiOOH intermediate then acts as an electrocatalyst for the oxidation of glucose to gluconolactone, which is further oxidized to gluconic acid. During the cathodic scan, at peak locations of 0.34 and −0.05 V, Ag2O was regenerated from AgO and then Ag was regenerated from Ag2O. 31−34 The peak identifications are consistent with the mechanism of silver nanoparticle-catalyzed glucose oxidation described by Poletti Papi et al. 29 In the presence of glucose, all three catalysts Ni, Cu@Ni, and Ag@Ni exhibited a higher anodic peak current (I pa), and the observed anodic peak potential (E pa) corresponded to their respective redox units of Ni3+, Ni3+/Cu3+, and Ni3+/Ag2+, validating the effective participation of the active catalytic centers of the redox couples in the oxidation of glucose.
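For reference, the Ni-mediated pathway described above can be written out as a simplified two-step scheme, consistent with the mechanism cited in the text (the analogous CuOOH and AgO/NiOOH couples drive the oxidation on Cu@Ni and Ag@Ni):

Ni(OH)2 + OH− → NiOOH + H2O + e−  (electrochemical activation during the anodic scan, ≈0.54 V)
2 NiOOH + glucose → 2 Ni(OH)2 + gluconolactone  (chemical step that regenerates Ni(OH)2)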
The increase in the anodic current as the glucose concentration was raised from 0 to 3 mM establishes the effective electrocatalytic activity of all three catalysts Ni, Cu@Ni, and Ag@Ni. 18, 24, 35 The high effective surface area of all the samples, with 75, 30, and 20 nm sized nanocrystals (based on SEM images), was a major driving force behind the excellent electrocatalytic activity toward glucose oxidation. In this study, we used the geometric area of the electrode for calculating the performance of the electrodes toward glucose oxidation; in future work, we will measure the ECSA. Since the electrodes were made of thin films of nanostructured transition metals, the ECSA is likely higher than the geometric surface area. In the case of bimetallic thin film deposition, the second set of nanoparticles deposited on the surface of the Ni nanocrystals is expected to increase the effective surface area and enhance the number of active catalytic sites. As discussed later, the overall electrocatalytic performance of the bimetallic catalysts exceeds that of Ni alone. To understand the electrokinetics of the catalytic glucose oxidation system for all three samples Ni, Cu@Ni, and Ag@Ni, a set of CV measurements was recorded with the scan rate ranging from 10 to 100 mV/s in increments of 10 mV/s in the presence of 1 and 3 mM glucose (the Supporting Information). As expected, the redox peak current density increased with an increase in the scan rate. In addition, the peak-to-peak separation between the oxidation and reduction peaks increased with the scan rate, and the ratio of the oxidation and reduction peak currents was not unity. The CV results indicate that the reaction is partially irreversible or quasi-reversible. 36, 37 The CV data at various scan rates were converted to a Randles–Sevcik plot of peak current versus the square root of the scan rate, as shown in Figure 6 for each of the three catalytic electrodes. For a surface electrochemical reaction that is diffusion controlled, the peak current is expected to follow the Randles–Sevcik equation, which for a fixed number of transferred electrons scales as i_p ∝ A C_0* (α D_0 ν)^(1/2), where D_0 is the diffusion coefficient of the analyte molecules, A is the area of the electrode, C_0* is the bulk concentration of the analyte molecules that diffuse, α is the transfer coefficient, and ν is the potential sweep speed. 38 The excellent linear fit of the data in Figure 6 is strong evidence that the electrocatalytic process is controlled by the diffusion of glucose molecules to the electrode/electrolyte interface. 39−42 For all three catalysts Ni, Cu@Ni, and Ag@Ni, the anodic peak current was also found to increase as the glucose concentration was increased from 0 to 1 to 3 mM. For our diffusion-controlled process, when more glucose molecules are present in the solution at a higher glucose concentration, a larger number of glucose molecules diffuses to the electrocatalyst surface, requiring a larger number of Ni3+, Ni3+/Cu3+, and Ni3+/Ag2+ redox units to act as electrocatalysts for glucose oxidation. From this increase in the anodic peak current with increasing glucose concentration, we can conclude the successful formation of the redox units (e.g., Ni3+ from Ni2+) and the participation of Ni3+ in the catalytic oxidation of glucose. 43, 44 Glucose Oxidation: Amperometric Measurement For glucose concentration measurement, amperometry is typically used, in which the electrode is held at a fixed potential and the current response is correlated with the glucose concentration.
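As an aside before the amperometric calibration is described, the scan-rate analysis above can be condensed into a short sketch. The code below fits peak current against the square root of the scan rate; the data arrays are placeholders rather than measured values, and a near-unity R² for the straight-line fit is the diffusion-control signature discussed above.

```python
import numpy as np

# Hypothetical peak currents (mA) at scan rates of 10-100 mV/s, for illustration only.
scan_rates_mV_s = np.arange(10, 101, 10)
peak_current_mA = np.array([0.42, 0.60, 0.73, 0.85, 0.95, 1.04, 1.12, 1.20, 1.27, 1.34])

x = np.sqrt(scan_rates_mV_s)                      # Randles-Sevcik abscissa: nu^(1/2)
slope, intercept = np.polyfit(x, peak_current_mA, 1)
r = np.corrcoef(x, peak_current_mA)[0, 1]
print(f"slope = {slope:.3f} mA/(mV/s)^0.5, R^2 = {r**2:.4f}")
# A linear i_p vs nu^(1/2) relationship (R^2 close to 1) indicates a
# diffusion-controlled electrocatalytic process, as argued in the text.
```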
This requires amperometric calibration of the electrodes against solutions of a known glucose concentration. A series of amperometric measurements were taken to calibrate each of the three types of electrodes (the Supporting Information ). The applied fixed potential was +0.54, +0.62, and +0.62 V vs Ag/AgCl for Ni, Cu@Ni, and Ag@Ni. Before the addition of glucose to the NaOH solution to conduct amperometry experiments, the electrodes were stabilized in 0.1 M NaOH solution at the fixed activating potential for 10 min. A quasi-stationary current of 28.52 μA/cm 2 (±0.51 μA/cm 2 ), 79.77 μA/cm 2 (±8.75 μA/cm 2 ), and 275.76 μA/cm 2 (±14.92 μA/cm 2 ) was recorded for Ni, Cu@Ni, and Ag@Ni, respectively, at the end of the stabilization period at a zero glucose concentration. The quasi-stationary current was determined by averaging the current response in the last 30 s of the stabilization period. After stabilization and blank current measurement, the current response was measured in the same 60 mL of 0.1 M NaOH as a 120 mM glucose stock solution was added incrementally. After each addition of glucose, the solution was stirred for 40 s, followed by a 260 s stabilization period. The current response to a particular glucose concentration was obtained by averaging the current readings in the last 30 s of the stabilization period. Figure 7 shows the resulting calibration curve for each of the three types of electrodes with the glucose concentration in mM as the x axis and the current response in mA/cm 2 (based on geometric surface area of the electrode) as the y axis. The calibration curves in Figure 7 were used to calculate the limit of detection, sensitivity, and linear range of response. The linear response range was defined as the bracketed glucose concentration range for which a linear regression could accurately represent the glucose concentration versus the current response. Sensitivity was calculated as the slope of the linear regression. The detection limit was calculated using the formula: (3 × sd)/ S (where, sd is the standard deviation of the blank signal and S is the slope of the calibration curve). Table 2 compares the figures of merit for all three samples Ni, Cu@Ni, and Ag@Ni. The samples Ni, Ag@Ni, and Cu@Ni exhibited a linear range of 0.2–1.8, 0.2–6.4, and 0.2–12.2 mM, respectively, with a high coefficient of determination of 0.983, 0.995, and 0.995, respectively. Cu@Ni showed the highest sensitivity of 420 μA/(mM cm 2 ) and the lowest for Ni with a value of 110 μA/(mM cm 2 ). The detection limits for Ni, Cu@Ni, and Ag@Ni were 14, 62.5, and 140 μM, respectively. The linear range of response as well as the sensitivity improved significantly for the bimetallic samples Cu@Ni and Ag@Ni in comparison to Ni. The sensitivity of Ag@Ni and Cu@Ni was 3× and 4× higher than that of Ni due to higher active catalytic sites in the bimetallic samples. The linear range of response for Ni was 0.2–1.8 mM of the glucose concentration. The upper limit of the linear response range increased from 1.8 to 6.4 and 12.2 mM for Ag@Ni and Cu@Ni, respectively. The linear response of the Cu@Ni electrode spans the entire normal glucose concentration range in human blood, and the upper limit of 12.2 mM is high enough to encompass warning levels indicating prediabetic and diabetic conditions. Comparison to Other Nonenzymatic Sensors The literature review on nonenzymatic electrochemical glucose sensors by Hwang et al. 
shows that noble metal, nonprecious transition metal/metal oxides, and metal alloy/composite catalytic materials have been designed and developed for electrochemical glucose sensing. 15 Noble metal-based electrodes show a high linear range of response toward glucose oxidation covering the human blood sugar levels of 2–8 mM; however, they showed poor sensitivity and were easily poisoned by interfering molecules besides being expensive. The nonprecious transition-metal counterpart had good sensitivity as well as anti-interference properties. However, the majority of the transition metal-based glucose sensing electrodes have a linear range of response that does not sufficiently cover human blood glucose concentrations. Furthermore, extremely high sensitivity in the case of some of the transition metal/metal oxide electrodes was due to high surface area substrates in the form of foam or porous substrates. 15 In this work, a broad linear response of 0.2–12.2 mM was achieved with a sensitivity of 420 μA/(mM cm 2 ) for the sample Cu@Ni. In one of the research articles, Anu Prathap et al. prepared CuO nanoparticles in the presence of tartaric acid/citric acid/amino acid and achieved a linear range of response 0.9–16.0 mM for glucose oxidation. However, during the amperometric experiment, the author modified a Pt electrode with the prepared CuO. There was no discussion if the underlying Pt electrode played a role in the high linear range recorded. Additionally, the author reported a sensitivity of only 9.02 μA/mM. 45 Similarly, in another work, Subramanian et al. deposited rGO/Ni(OH) 2 composites on Au electrodes to get a linear range of response of 15 μM–30 mM with a sensitivity of 11.4 mA mM –1 cm –2 . However, here also there was no discussion if the underlying gold substrate played a role in the high value of figure of merits. 46 In another work by Zhang et al., the author started with a Cu–Zr–Ag ingot and developed metallic glass ribbons by melt spinning followed by dealloying and other procedures, like anodizing, to finally form nanowires on nanoporous substrates. No detailed discussion was carried out about the dimension of the nanoporous substrate and how that might impact the result of a linear range of glucose detection of up to 15 mM with a sensitivity of 1310 μA/(mM cm 2 ). 47 A glucose oxidase-based enzymatic glucose sensor, Gox/Au–ZnO/GCE, prepared by Fang et al. recorded a linear range of response of 1–20 mM with a sensitivity of 1.409 μA/mM. 48 Here also, in addition to enzyme, gold was used in the electrode. Jeong et al. prepared another enzymatic sensor, Gox/3D MoS 2 /graphene aerogel, with a linear range of glucose detection of 2–20 mM and a sensitivity of 3.36 μA/mM. 49 In a recent review, Sehit and Altintas tabulated the performance of an enzyme-based glucose sensor with a widest linear range of glucose detection reported as 0–25 mM and a highest value of sensitivity noted as 289 μA/(mM cm 2 ) for a different electrode material. 50 Additionally, a very recent review of copper-based glucose sensors reported figures of merits from the literature reports of 520+ sensors. 51 From this list, we find only eight studies that reported a higher upper limit of linear response range to the glucose concentration while maintaining a greater sensitivity than our best catalyst (Cu@Ni; linear response range: 0.2–12.2 mM and sensitivity: 420 μA/(mM cm 2 ). Our work also has the added advantage of a simple low-cost preparation of bimetallic catalysts and sensors. 
There were no expensive materials involved in the catalyst synthesis, and the resulting catalyst-coated solid titanium plate is used directly as the sensing electrode without requiring other materials to facilitate electron transfer during the glucose oxidation reaction. Glucose Oxidation: Selectivity, Stability, and Reproducibility One of the important parameters to be considered for fabricating a sensor material for the catalytic oxidation of glucose is its ability to eliminate the interfering responses generated by species with electroactivity similar to that of the target analyte. In this work, for the Ni, Cu@Ni, and Ag@Ni samples, the selectivity toward glucose was tested in the presence of common interferents, including ascorbic acid (AA), uric acid (UA), dopamine (DA), lactose, maltose, fructose, and galactose. The experiments were carried out at fixed applied potentials of +0.54, +0.62, and +0.62 V for Ni, Cu@Ni, and Ag@Ni, respectively. The changes in the current response after the addition of the glucose and interferent solutions were studied. After each addition of glucose or an interferent, the solution was stirred for 40 s, followed by a 260 s stabilization period. The current response to a particular chemical addition was obtained by averaging the current readings in the last 30 s of the stabilization period. After the initial stabilization of 10 min, 1 mM glucose was added to 60 mL of 0.1 M NaOH, followed by 0.2 mM of each of the interferents every 300 s, and at the end, another 2 mM glucose was added. From Figure 8a, it was seen that for Ni, the current response after the addition of 1 mM glucose was 0.37 mA, and the current increased to only 0.40 mA after the addition of all seven interferents, each at a concentration of 0.2 mM. After the addition of 2 mM glucose at the end, there was no significant increase in the current response for the sample Ni. The surface of the Ni electrode material was poisoned and deactivated in the presence of all the interfering chemicals and did not respond to the glucose added at the end of the experiment. In the case of Cu@Ni (Figure 8c), the current stabilized at 0.45 mA after the addition of the initial 1 mM glucose and increased by an average of 10% after the addition of each interferent. The Cu@Ni electrode remained fully functional even after the addition of all the interferents, and the current increased to 1.93 mA after the addition of 2 mM glucose at the end; nevertheless, further work on improving the selectivity of Cu@Ni needs to be pursued. The selectivity toward catalytic glucose oxidation for the sample Ag@Ni is shown in Figure 8b. The initial current after the addition of 1 mM glucose was measured as 0.62 mA, and the average current increment after the addition of each of the interferents was only 2%. Ag@Ni was fully stable and functional until the end of the selectivity test and recorded an increased current response of 1.44 mA after the final addition of 2 mM glucose. The selectivity performance of Ag@Ni was attributed to the fine, uniform deposition of Ag nanoparticles on the surface of the Ni nanoparticles and to the individual material properties of Ni and Ag. The reproducibility of each of the electrode samples was tested by fabricating four replicates of each of Ni, Cu@Ni, and Ag@Ni and then measuring the anodic oxidation current at +0.54, +0.62, and +0.62 V, respectively, from the oxidation of 1 mM glucose in 60 mL of 0.1 M NaOH solution.
The mean anodic oxidation current was calculated, along with the standard deviation. The relative standard deviations for Ni, Cu@Ni, and Ag@Ni were 9.67, 6.47, and 5.12%, respectively. The stability of the Ni nanospheres electrodeposited on the Ti surface, as well as that of the bimetallic nanocrystalline Cu@Ni and Ag@Ni coatings, after repeated CV experiments in the presence of 1 mM glucose in 0.1 M NaOH was studied using ICP-MS analysis. The ICP-MS measurements were conducted for the Ni, Cu, and Ag ions that leached into the NaOH solution after 25 repeated CV cycles at 50 mV/s between −0.6 and 1.1 V. Three replicates of each of the samples were tested, along with a control experiment in which CV was performed on bare Ti without any material deposited on its surface. Only small amounts of metal ions leached into the solution, as shown in Table 3, validating the stability of the materials deposited on the surface of Ti even after long experimental runs.
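As a compact illustration of the figures of merit reported above, the sketch below reproduces the detection-limit calculation (LOD = 3·sd/S) from the quoted blank-signal noise and calibration slopes. The Ni and Cu@Ni slopes are the values stated in the text; the Ag@Ni slope is not quoted explicitly and is inferred from the stated "about 3× the Ni sensitivity", so it is an assumption. The linear-range fit itself would come from the full calibration data in Figure 7.

```python
# Detection limit LOD = 3*sd/S, using the blank-signal noise and sensitivities quoted in the text.
blank_sd_uA_cm2 = {"Ni": 0.51, "Cu@Ni": 8.75, "Ag@Ni": 14.92}    # std of blank current (uA/cm^2)
sensitivity_uA_mM_cm2 = {"Ni": 110, "Cu@Ni": 420, "Ag@Ni": 330}  # Ag@Ni value is inferred (~3x Ni), not quoted

for sample, sd in blank_sd_uA_cm2.items():
    S = sensitivity_uA_mM_cm2[sample]
    lod_uM = 3 * sd / S * 1000          # convert mM to uM
    print(f"{sample}: LOD = {lod_uM:.0f} uM")
# Ni gives ~14 uM and Cu@Ni ~62 uM, matching Table 2; Ag@Ni comes out near
# 136 uM with the assumed slope, close to the reported 140 uM.
```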
Conclusions Electrochemical deposition, being a clean, easy-to-operate, and low-cost method, was effectively used to fabricate nanocrystals of nickel as well as composite nanostructures of nickel–copper and nickel–silver. A multistage electrochemical deposition technique was optimized to deposit 20 nm sized Ag nanoparticles uniformly on 75 nm sized Ni crystals. This two-stage electrodeposition method was replicated for other metals, thereby depositing 30 nm sized Cu nanostructures on the Ni nanospheres. The materials prepared in this work were tested for their catalytic activity, sensitivity, selectivity, and linear range of response toward glucose oxidation. On these figures of merit, Cu@Ni and Ag@Ni outperformed nanocrystalline Ni alone. The combined participation of the two metals provided a higher catalytic surface area and better electron transport, leading to a wide linear range of amperometric response as well as high sensitivity. Cu@Ni recorded a wide 0.2–12.2 mM linear range of glucose concentration detection with the best sensitivity of 420 μA/(mM cm2) among all three electrode materials. The linear range of response for glucose detection for both Cu@Ni and Ag@Ni covered the glucose concentrations expected in a normal human blood sample. Ag@Ni showed the best selectivity toward glucose oxidation in the presence of interferents, with only an average 2% increase in current response after the addition of each interferent. All three variants were reproducible and remained stable on the surface of titanium after extended reactions. The performance of the transition-metal composites Ag@Ni and Cu@Ni developed in this work compares favorably with the existing literature on transition metal-based nonenzymatic glucose sensors, combining a wide linear range of response with good sensitivity and achieved without relying on platinum or gold electrode substrates.
Bimetallic glucose oxidation electrocatalysts were synthesized by two electrochemical reduction reactions carried out in series onto a titanium electrode. Nickel was deposited in the first synthesis stage followed by either silver or copper in the second stage to form Ag@Ni and Cu@Ni bimetallic structures. The chemical composition, crystal structure, and morphology of the resulting metal coating of the titanium electrode were investigated by X-ray diffraction, energy-dispersive X-ray spectroscopy, X-ray photoelectron spectroscopy, and electron microscopy. The electrocatalytic performance of the coated titanium electrodes toward glucose oxidation was probed using cyclic voltammetry and amperometry. It was found that the unique high surface area bimetallic structures have superior electrocatalytic activity compared to nickel alone. The resulting catalyst-coated titanium electrode served as a nonenzymatic glucose sensor with high sensitivity and low limit of detection for glucose. The Cu@Ni catalyst enables accurate measurement of glucose over the concentration range of 0.2–12 mM, which includes the full normal human blood glucose range, with the maximum level extending high enough to encompass warning levels for prediabetic and diabetic conditions. The sensors were also found to perform well in the presence of several chemical compounds found in human blood known to interfere with nonenzymatic sensors.
Experimental Procedure Materials Tris(hydroxymethyl)aminomethane (>99.8%), AgNO3 (>99.0%), NaCl, NaOH, NaNO3, NiSO4·6H2O (>99.0%), CuSO4·5H2O (>99.0%), NH4Cl (99.9%), and dextrose were purchased from Sigma-Aldrich. Ethanol (200 Proof) was obtained from Koptec. Titanium (Ti, 0.89 mm thick) and platinum (Pt, 0.127 mm thick) were obtained from Alfa Aesar. All chemicals were used as received. Deionized water was used throughout the experiments. Synthesis of Coatings The Ti plate (8 × 8 × 0.89 mm3) was cleaned by ultrasonication in detergent solution and then in ethanol, followed by rinsing with deionized water. During the two-stage electrochemical deposition of bimetallic thin films, a two-electrode system was used with a Pt plate as the anode and a Ti plate as the cathode. The two electrodes were submerged in the electrolyte solution and maintained at a fixed distance of 10 mm. An aqueous electrolyte solution for the electrodeposition of Ni nanoparticles was prepared by adding 50 mM NaCl, 50 mM tris(hydroxymethyl)aminomethane, 0.75 mM NiSO4·6H2O, and 18.75 mM NH4Cl to 125 mL of water under continuous stirring, with the pH adjusted to 7.3 by addition of HCl. The electrolyte solution was heated to 95 °C using an oil bath for 20 min prior to deposition. A constant current density of 62.5 mA/cm2 was applied for durations of 2, 4, 6, 8, and 12 min to deposit a uniform layer of Ni nanocrystals. After the deposition, the coating was washed with deionized water and dried in atmospheric air. The Ni-coated plate was used as the electrode for a second electrochemical reduction reaction to deposit either Ag or Cu nanoparticles. For the second stage of Ag deposition on the Ni-coated Ti plate, an aqueous electrolyte solution was prepared by adding 50 mM NaNO3, 50 mM tris(hydroxymethyl)aminomethane, 0.5 mM AgNO3, and 12.5 mM NH4Cl to 125 mL of water. The Ni-coated Ti plate and the Pt plate were used as the cathode and anode, respectively. The reaction was carried out at a constant current density of 62.5 mA/cm2 at room temperature with constant stirring for 4 min. For the second stage of Cu deposition on a Ni-coated Ti plate, an electrolyte solution was prepared by adding 50 mM NaCl, 50 mM tris(hydroxymethyl)aminomethane, 0.5 mM CuSO4, and 12.5 mM NH4Cl to 125 mL of water. A constant current density of 62.5 mA/cm2 was applied at room temperature under constant stirring for 4 min to deposit Cu nanoparticles on top of the Ni crystals. At the end of the electrodeposition process, the composite coatings were thoroughly rinsed with deionized water and dried at room temperature. Ag and Cu were found to form as separate nanoparticles rather than an alloy with Ni. The bimetallic films were named Ag@Ni and Cu@Ni, respectively. Scanning Electron Microscopy The surface composition and morphology of the coating were obtained using a Zeiss-Leo DSM982 scanning electron microscope. The microscope was equipped with a Phoenix energy-dispersive X-ray spectrometer, which was used to analyze the composition of the constituent elements on the surface of the coatings. X-ray Diffraction The crystal structure of the coating was studied by using powder X-ray diffraction (XRD). The powdered coating samples were obtained by removing the nanoparticles from six identical samples prepared under the same conditions using ultrasonication in water followed by low-temperature heating to remove the water.
The XRD data from the resulting powder samples were from a Philips model PW3020 diffractometer with Cu Kα radiation (λ = 1.5418 Å) measured in the range of 10–80°. X-ray Photoelectron Spectroscopy The surface composition of the coating was studied by using a Kratos AXIS Ultra DLD X-ray photoelectron spectrometer fitted with a monochromatic Al anode X-ray gun (Kα = 1486.6 eV) and a spectrum electron analyzer. The survey spectra were collected using a pass energy of 160 eV whereas high-resolution spectra were collected using a pass energy of 20 eV. All the spectra were fitted using CasaXPS software. For the X-ray power source, a mono Al filament was used with an emission current of 10 mA and anode HT at 15 kV. The C peak appeared at 285.00 eV at the survey spectra and was used as the internal reference for charging correction. Inductively Coupled Plasma Mass Spectroscopy Inductively coupled plasma mass spectroscopy (ICP-MS) was used to measure the concentration of Ni, Ag, and Cu ions that leached into the solution during the electrocatalytic detection of glucose molecules using the fabricated electrodes. The ICP-MS system used was a PerkinElmer Model NexION 2000 in STD and KED modes with 4.2 mL/min He flow. The reference material tested was Seronorm blood with accepted values of 980 ± 80, 9.2 ± 1.9, and 9.7 ± 0.4 ppb and measured values being 991.4, 8.62, and 9.35 ppb for Cu, Ni, and Ag measurement, respectively. The Ni-coated Ti plate, Ni–Ag-coated Ti plate, and Ni–Cu-coated Ti plate were placed in 0.1 M NaOH solution containing 1 mM glucose at room temperature for 25 cycles of cyclic voltammetry (CV) measurements from −0.6 to 1.1 V. The resulting glucose solutions in NaOH collected after CV experiments were sent for ICP-MS analysis. All the measurements were done in triplicate, and the mean was reported. Additionally, a control experiment was carried out using a Ti plate without a coating for comparison. Electrochemical Glucose Oxidation A conventional three-electrode system consisting of a platinum plate as the counter electrode, Ti plate coated with metal nanoparticles as the working electrode, and a Ag/AgCl reference electrode was used to characterize the electrochemical properties of the fabricated electrodes. The electrochemical analysis was studied using a homemade potentiostat/galvanostat. 21 CV experiments were conducted between −0.6 and 0.7 V for studying the electrocatalytic activities of the electrode materials and their electrokinetic properties in the presence or absence of glucose in 0.1 M NaOH. Electrochemical experiments were conducted at constant potential condition with successive addition of glucose stock solution to record the amperometric responses of the samples to an increasing glucose concentration. The calibration curves plotted by using the amperometric data were then used to calculate the limit of detection, sensitivity, and linear range of response for glucose for all three sample electrodes. The current density mentioned in the calibration curves was calculated using the geometric surface area and not the electrochemically active surface area (ECSA). Selectivity was similarly tested by successively adding glucose along with other potential interferants in a 0.1 M NaOH solution at constant potential condition. Reproducibility was tested by measuring the peak current in the presence of 1 mM glucose for each type of sample. 
The stability of the investigated electrode materials was tested by conducting 25 CV cycles in 0.1 M NaOH in the presence of glucose, followed by ICP-MS measurements of the electrolyte solution to determine whether any metal ions had leached into the solution during the extended CV runs.
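As a rough check on the first-stage electrodeposition parameters given above (62.5 mA/cm2 for 8 min on an 8 mm × 8 mm Ti plate), the sketch below estimates the maximum Ni mass that could be deposited via Faraday's law. Single-sided deposition and 100% faradaic efficiency are assumptions made only for this illustration; the real deposit will be smaller because of hydrogen evolution and other side reactions.

```python
# Upper-bound estimate of deposited Ni mass from Faraday's law, m = Q*M/(n*F),
# assuming single-sided deposition and 100% faradaic efficiency (idealizations).
F = 96485.0          # Faraday constant, C/mol
M_Ni = 58.69         # molar mass of Ni, g/mol
n = 2                # electrons per Ni2+ ion reduced to Ni

current_density = 62.5e-3      # A/cm^2, as stated in the synthesis section
area = 0.8 * 0.8               # cm^2, geometric area of one face of the 8 mm x 8 mm plate
t = 8 * 60                     # s, deposition time for the chosen sample Ni

Q = current_density * area * t          # total charge passed, C
m_mg = Q * M_Ni / (n * F) * 1000        # theoretical Ni mass, mg
print(f"Charge passed: {Q:.1f} C; theoretical Ni deposit: {m_mg:.1f} mg")
# ~19.2 C and ~5.8 mg for these inputs.
```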
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsami.3c10167 . XPS spectra, TEM images, EDX analysis, CV curves, and raw amperometry data used for characterizing sensor performance ( PDF ) Supplementary Material The authors declare no competing financial interest. Acknowledgments This work of R.G. was supported by the University of Rochester. M.Z.Y. was supported by the University of Rochester and by the Department of Energy National Nuclear Security Administration under award number DE-NA0003856. This report was prepared as an account of work sponsored by an agency of the U.S. Government. Neither the U.S. Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the U.S. Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the U.S. Government or any agency thereof.
CC BY
no
2024-01-16 23:43:51
ACS Appl Mater Interfaces. 2023 Dec 20; 16(1):17-29
oa_package/50/5e/PMC10788829.tar.gz
PMC10788830
38115194
Introduction Water vapor condensation is ubiquitous in nature and everyday life. 1 , 2 It plays an important role in a variety of applications involving heat and mass transfer, 3 − 5 e.g., for water harvesting, 6 − 8 water desalination, power generation, and thermal management. Most heat transfer devices are manufactured from metals with high thermal conductivity, e.g., ∼398 W·m –1 ·K –1 for copper. However, the metals are hydrophilic and easily wetted by condensation from steam, leading to a stable liquid film covering the surface. 9 , 10 During this so-called “filmwise condensation” mode, the liquid film hinders heat transfer because of its significant thermal resistance. By applying a low-adhesion or hydrophobic polymer coating on the metal surface, 11 − 13 the condensate can nucleate, grow, coalesce, and easily slide away from the surface in the form of distinct droplets. This condensation mode is called “dropwise condensation”. 14 It can show a performance enhancement of up to 1 order of magnitude compared to filmwise condensation, thanks to the periodic condensate removal, which leaves an accessible surface for fresh droplet nucleation. 4 , 15 On the other hand, these polymeric coatings usually have very low thermal conductivities on the order of 0.1–0.5 W·m –1 ·K –1 . 16 A thick polymer coating increases the thermal resistance, leading to an inefficient heat transfer process. For example, the state-of-the-art coatings, e.g., superhydrophobic surfaces, 17 − 20 and lubricant-infused surfaces, 21 − 25 usually have a large thickness ranging from micrometer to millimeter ( Figure 1 a, Tables S1 and S2 , Supporting Information). It can lead to a significantly high thermal resistance of more than 10 –5 m 2 ·K·W – 1 on a copper substrate, 26 − 28 compromising the heat transfer benefits from the dropwise condensation mode. To reduce the thermal resistance, ultrathin polymer brushes (ideally at the nanoscale level), such as polydimethylsiloxane (PDMS) brushes, 29 − 36 can be grafted onto the metal substrate. The coatings are ultrathin (∼6 nm) with low thermal resistance (<10 –7 m 2 ·K·W – 1 , Figure S1 , Supporting Information) and able to repel water drops with low contact angle hysteresis (<10°). Achieving a small coating thickness usually comes at the cost of compromised robustness. Despite the nanoscale thickness, PDMS brushes are promising alternative materials compared to superhydrophobic and lubricant-infused surfaces due to the absence of micro- or nanoscale rough surface topography, which typically is prone to damage, 37 as well as the good adhesion to the substrate due to strong covalent grafting. On the contrary, for superhydrophobic surfaces, the superhydrophobicity relies on vapor cushions within the micro/nanostructures (Cassie state). 38 − 40 At elevated supersaturation, impalement of the micro/nanostructures by water will occur (Wenzel state); 41 thus, the surface loses its superhydrophobicity, leading to filmwise condensation ( Figure 1 b). 42 , 43 For lubricant-infused surfaces, although studies have shown their excellent liquid repellency, and heat transfer coefficient up to 5 times higher compared to filmwise condensation, 26 they still face the issue of gradual lubricant depletion in the long term ( Figure 1 b). 44 Another problem in condensation applications is the contamination on the surface, e.g., biofouling, which can be a major issue that limits the heat transfer efficiency in industrial applications, e.g., condenser tubes, 45 , 46 and heat exchangers. 
47 Microorganisms, such as bacteria, can attach to the surface of the condenser fins and tubes, acting as defects, 48 and continue expanding to form a fouling layer that can affect the heat transfer. This fouling layer can reduce the efficiency of the condenser by acting as an insulator, increasing the resistance to heat transfer and obstructing the departure of water. Although this problem is more relevant to the cooling side (the internal part of condenser tubes), it is not rare that the external part of the condenser tubes faces the problem of contamination. For example, in atmospheric water harvesting applications, 49 , 50 dust or microorganisms may attach to the surface during environmental exposure. Under ambient conditions, bacteria can easily grow and form a fouling layer. Therefore, antifouling is a very desirable property of coatings for heat transfer applications. Finally, the green chemistry of the polymeric coating material is also essential. With continuous use, coating degradation is inevitable, and biopersistent elements can be released into the environment, especially in processes that involve open systems. Specifically, hydrophobic surfaces sustaining dropwise condensation are usually made with long-chain perfluorinated polymers, which are not environmentally friendly, and their byproducts during degradation tend to bioaccumulate. 51 Here, we study the condensation of water on PDMS brushes ( Figure 1 c) under harsh experimental conditions. Because of their strong covalent bond with the substrate ( Figure S2 , Supporting Information) and the absence of rough surface microfeatures, PDMS brushes are stable even at challenging, high subcooling values and steam pressures. We experimentally demonstrate the coating resilience with an accelerated endurance test characterized by exposure to superheated steam at 111 °C and 1.42 bar with a shear velocity of 3 m·s –1 . Under the aforementioned conditions, the PDMS brushes can sustain dropwise condensation for at least 8 h and show a heat transfer coefficient that is 5 times greater than that of filmwise condensation. We also show that the PDMS brushes can effectively repel bacteria such as Escherichia coli and Staphylococcus aureus , reducing the attachment by 99%. With all these merits, PDMS brushes are promising to open a new avenue to enhance practical heat transfer performance by sustainable and effective means.
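To put the coating thermal resistances quoted above into perspective, the short sketch below evaluates the one-dimensional conduction resistance R = t/k for micrometer-scale polymer coatings versus a ~6 nm PDMS brush layer. The thermal conductivity used is a typical literature value for polymers (on the order of 0.1–0.5 W·m−1·K−1) and is assumed here only for an order-of-magnitude comparison.

```python
# Order-of-magnitude comparison of planar coating thermal resistances, R = t / k.
def coating_resistance(thickness_m: float, conductivity_W_mK: float) -> float:
    """Return the 1D conduction resistance of a planar coating in m^2*K/W."""
    return thickness_m / conductivity_W_mK

k_polymer = 0.2          # W/(m*K), a typical polymer value (assumed for illustration)
cases = {
    "10 um hydrophobic polymer coating": 10e-6,
    "1 um hydrophobic polymer coating": 1e-6,
    "6 nm PDMS brush layer": 6e-9,
}
for label, t in cases.items():
    print(f"{label}: R = {coating_resistance(t, k_polymer):.1e} m^2*K/W")
# The micrometer-scale coatings land in the 1e-5 to 1e-6 m^2*K/W range, while the
# 6 nm brush is ~3e-8 m^2*K/W, consistent with the <1e-7 m^2*K/W figure quoted above.
```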
Results and Discussion Preparation and Characterization of PDMS Brushes PDMS brushes were prepared by a drop-casting and annealing method as described by Krumpfer and McCarthy for silicon wafers ( Figure 2 a and Methods). 52 − 56 Briefly, a PDMS liquid drop (molecular weight of 11,740 g·mol –1 ) was deposited onto an oxygen-plasma-activated copper surface, followed by heating. The grafting process of PDMS brushes onto the surface is initiated by siloxane hydrolysis. Then, the silanol-terminated chain can be covalently bonded onto the hydroxyl group on the copper ( Figure S3 , Supporting Information). 29 , 57 , 58 The resulting PDMS coating on copper is smooth with root-mean-square roughness of 3.6 ± 0.5 nm in an area of 500 × 500 nm 2 ( Figure 2 c). This value is mainly related to the roughness of the pristine copper substrate (3.0 nm ±0.7 nm). The cost of the coating is estimated to be less than 10 USD per m 2 ( Table S3 , Supporting Information). The coated surface is hydrophobic with a water advancing contact angle of 111° ± 1° and a contact angle hysteresis of 10° ± 4° ( Figure 2 b). PDMS brushes could also be easily applied on a variety of materials, such as silicon and aluminum, leading to similarly improved wetting properties: water advancing contact angle and contact angle hysteresis on silicon and aluminum substrates are 108° ± 1°, 8° ± 1°, and 110° ± 1°, 12° ± 1°, respectively. In addition, PDMS brushes can be applied on curved surfaces. As shown in Figure 2 d and Video S1 (Supporting Information), a water drop slides off on a PDMS-coated cylinder copper surface (diameter: 24 mm) within 1 s. The thickness and grafting density of PDMS brushes were analyzed by atomic force microscopy (AFM) force spectroscopy ( Figure S4 , Supporting Information), giving a coating thickness of d = 6 ± 1 nm. It has been reported recently that lubricant thickness could be optimized for an efficient condensation process on lubricant-infused surfaces utilizing drop coarsening due to merging by lateral capillary forces. 59 Such a mechanism cannot work for the much thinner PDMS brush coatings, but their orders of magnitude smaller thickness leads to a negligible thermal resistance, which is even more beneficial for heat transfer. Following the equation Γ = ( d ρ N A )/ M w , the grafting density Γ of our PDMS brushes was estimated as 0.3 ± 0.05 chains·nm –2 , where ρ and N A represent mass density and Avogadro constant, respectively. 60 The water condensation dynamics on PDMS brushes are first studied at the microscale, in situ, using an environmental scanning electron microscope (ESEM, FEI Quanta 650 FEG). As shown in Figure 2 e, the droplets maintain spherical cap shapes. While growing, the droplets easily coalesce without visible contact line pinning, suggesting the excellent droplet mobility and water repellency of PDMS brushes for tiny condensing droplets. When considering superhydrophobic surfaces, it can be challenging to repel tiny droplets because the superhydrophobicity relies on the empty space within structures. 19 , 61 , 62 At high subcooling values, if the droplet size is comparable to or smaller than the size of the surface features, droplets may stay pinned inside these features, and this may cause coalescence at early growth stages with other droplets and consequently lead to the formation of filmwise condensation. 39 − 43 , 63 As a primary durability test for the PDMS brushes, we used drop sliding measurement in a custom-built device, as reported before. 
64 A needle connected with a peristaltic pump generated a series of water drops (each 45 μL). The stage was tilted at 50°, and a high-speed camera (FASTCAM MINI UX100, see Methods for details) was attached to the stage to capture videos. After continuously sliding thousands of water drops over the surface, PDMS brushes still exhibit good hydrophobicity ( Videos S2 – S5 , Supporting Information). The velocity of drops 1 and 5000 was 0.08 m·s –1 ± 0.02 m·s –1 and 0.10 m·s –1 ± 0.02 m·s –1 , respectively ( Figure S5 , Supporting Information). The corresponding dynamic advancing contact angle and contact angle hysteresis for drop 1 and drop 5000 were 130° ± 3°, 66° ± 7°, and 126° ± 3°, 56° ± 6°, respectively. The larger value of dynamic contact angle hysteresis compared to the static contact angle hysteresis shown in Figure 2 b is attributed to the substantially higher drop velocity. 65 Condensation Heat Transfer Performance at Low Pressure The condensation heat transfer performance of PDMS brushes was tested with a custom-built condensation chamber under low saturation vapor pressure (30 mbar, steam temperature 24 °C) ( Figure S6 , Supporting Information, and methods for details). 5 , 26 These conditions are comparable to industrial condensers’ operation parameters. The steam was generated from an electric boiler and flowed horizontally across the sample, where the flow speed was ∼4.6 m·s –1 . With increased subcooling, the dropwise condensation mode on PDMS brushes was maintained ( Figure 3 a, Videos S6 – S8 , Supporting Information), without a change of the circular drop shape. On our superhydrophobic reference surface (see Experimental Section for details), 66 dropwise condensation was observed at subcooling <1 K. Superhydrophobic surfaces are known for their jumping dropwise condensation mode only at low subcooling. 17 , 67 However, the drops show an irregular shape at subcooling of 2 K and finally turn into a liquid film at 3 K. The filmwise condensation is due to the flooding at high subcooling values because the surface remains superhydrophobic after condensation upon drying. Filmwise condensation on the superhydrophobic surface still allows higher heat transfer compared to that on the conventional hydrophilic surface. The reason is the difference in the wetting situation. On the conventional hydrophilic surface, a thick liquid film is formed, leading to large thermal resistance. It can be recognized from the bulge formed at the bottom of the hydrophilic surface ( Figure 3 a and Video S6 ). On the superhydrophobic surface, filmwise condensation leads to flooding of the surface structure, which then acts as a wicking layer. Therefore, film thickness is reduced to the order of the structure thickness of the superhydrophobic surface layer, and no bulge is visible at the lower end. This leads to a lower thermal resistance of the superhydrophobic layer during filmwise condensation ( Figure 3 a and Video S7 ). The different condensation modes for these three surfaces highlight the ability of PDMS brushes to sustain dropwise condensation over a wide range of subcooling values. This stability of dropwise condensation is also reflected in the trend of the heat transfer coefficient as a function of subcooling ( Figure 3 b). Although the superhydrophobic surface shows better heat transfer performance than PDMS brushes at low subcooling, the performance on the superhydrophobic surface decreases and approaches that for filmwise condensation around a subcooling of 2–3 K. 
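The heat transfer coefficients plotted in Figure 3b are obtained from the measured heat flux and the surface subcooling. A minimal sketch of that relation (h = q″/ΔT) is given below; the numbers in the example are illustrative placeholders, not measured data from this work.

```python
# Condensation heat transfer coefficient from measured heat flux and subcooling: h = q'' / dT.
def htc(heat_flux_kW_m2: float, subcooling_K: float) -> float:
    """Return the heat transfer coefficient in kW/(m^2*K)."""
    return heat_flux_kW_m2 / subcooling_K

# Illustrative example with placeholder values: 100 kW/m^2 at 2 K subcooling.
print(f"h = {htc(100.0, 2.0):.0f} kW/(m^2*K)")   # 50 kW/(m^2*K)
```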
Figure 3c plots the heat fluxes of the three surfaces. At a subcooling of 3 K, PDMS brushes exhibit a heat flux of 233 kW·m–2, which is 20% higher than that of the superhydrophobic surface. It should be noted that we cannot measure the heat transfer coefficient at higher subcooling (>3 K) for these better-performing surfaces because of their high heat transfer efficiency. Nevertheless, it is reasonable to predict that PDMS brushes can still maintain dropwise condensation at higher subcooling values or condensation rates, due to the absence of micro- and nanostructures that can eventually get flooded with water. Condensation Heat Transfer Performance at High Pressure To quantitatively evaluate condensation heat transfer performance in harsh conditions, the PDMS brush surface was tested in a high-pressure flow chamber where the steam pressure and temperature were 1.42 bar and 111 °C, respectively (Figure S7, Supporting Information). 5 The experiments were conducted in a flow condensation environment, with the steam flowing vertically at velocities of 3 or 9 m·s–1. As shown in Figure 4a,b, PDMS brushes exhibited a higher heat transfer coefficient at both steam velocities. At 3 m·s–1, the average heat transfer coefficient reached 125 kW·m–2·K–1, which is ∼5 times higher than that on a bare CuO reference surface (filmwise condensation). Due to enhanced advection, the heat transfer performance was better on both surfaces at a steam flow rate of 9 m·s–1. On PDMS brushes, the heat transfer coefficient reached 233 kW·m–2·K–1, which is ∼7 times higher than that on the CuO surface. To test the coating durability under condensation, we focused on the PDMS brushes at a steam flow rate of 3 m·s–1 for an extended period (∼8 h). The heat transfer coefficient and corresponding subcooling were continuously measured over 8 h, while condensation rates were recorded at several intervals (Figure 4c and Video S9, Supporting Information). In the first 0.5 h, the system had to stabilize, and the heat transfer coefficient initially increased and oscillated. After ≈0.5 h, the experimental conditions were stable, and the surface exhibited a heat transfer coefficient of ∼121 kW·m–2·K–1. Up until 7 h, dropwise condensation remained the dominant mode. Afterward, an increase in the departure droplet sizes was observed, and localized filmwise condensation islands appeared (∼30% of the area showed filmwise condensation). However, the heat transfer coefficient remained as high as 103 kW·m–2·K–1 at 8.8 h, which is still more than 4 times higher than that of filmwise condensation. This accelerated durability test demonstrates that the ultrathin PDMS brushes sustain efficient dropwise condensation in harsh conditions for ∼8 h, showing their potential for practical applications where the conditions are much milder. 68 The filmwise condensation at the end may be related to oxidation of the copper substrate, which degrades the wettability of the PDMS coating. 67, 69 It should be noted that even after degradation, the wetting properties of the coating can be restored by applying a small amount of PDMS oil (Figure S8, Supporting Information). Moreover, as a future perspective, although the PDMS brush coating is grafted on a flat substrate here, it could also be used to modify structured surfaces to further enhance condensation. 70 Antifouling Test We further measured the antifouling property by immersing the substrate in solutions containing Escherichia coli ( E. coli ) and Staphylococcus aureus ( S. aureus ), respectively. Both E.
coli and S. aureus are commonly found bacteria. E. coli is Gram-negative and rod-shaped, while S. aureus is Gram-positive and spherically shaped. After 1 day of culture at 37 °C, the samples are taken out and washed gently to remove the floating bacteria. Scanning electron microscopy (SEM) images showed a significantly reduced bacterial number on PFDTS and PDMS surfaces when compared to those on the uncoated surface ( Figures 5 a and S9 , Supporting Information). Specifically, the attached E. coli number density was (2.7 ± 0.3) × 10 11 m –2 on Si surfaces, (1.5 ± 1.1) × 10 9 m –2 on PFDTS surfaces, and (2.6 ± 1.0) × 10 9 m –2 on PDMS surfaces ( Figure 5 b). For S. aureus , the number density on the three surfaces were (9.1 ± 4.9) × 10 11 m –2 , (1.1 ± 0.8) × 10 10 m –2 , and (1.0 ± 0.6) × 10 10 m –2 , respectively. The calculated antibacterial efficiency (i.e., the ratio of reduced bacterial amount on the surface to the total amount on the Si surface) reached ∼99% on both PFDTS and PDMS surfaces, showing the comparable antifouling property of the PDMS surface to the conventional fluorinated PFDTS surface. A quick anticontamination test showed that the PDMS brushes can effectively repel adhesive materials such as chalk powder and chili sauce, revealing its self-cleaning properties ( Figure S10 and Videos S10 – S11 , Supporting Information).
Results and Discussion Preparation and Characterization of PDMS Brushes PDMS brushes were prepared by a drop-casting and annealing method as described by Krumpfer and McCarthy for silicon wafers ( Figure 2 a and Methods). 52 − 56 Briefly, a PDMS liquid drop (molecular weight of 11,740 g·mol –1 ) was deposited onto an oxygen-plasma-activated copper surface, followed by heating. The grafting process of PDMS brushes onto the surface is initiated by siloxane hydrolysis. Then, the silanol-terminated chain can be covalently bonded onto the hydroxyl group on the copper ( Figure S3 , Supporting Information). 29 , 57 , 58 The resulting PDMS coating on copper is smooth with root-mean-square roughness of 3.6 ± 0.5 nm in an area of 500 × 500 nm 2 ( Figure 2 c). This value is mainly related to the roughness of the pristine copper substrate (3.0 nm ±0.7 nm). The cost of the coating is estimated to be less than 10 USD per m 2 ( Table S3 , Supporting Information). The coated surface is hydrophobic with a water advancing contact angle of 111° ± 1° and a contact angle hysteresis of 10° ± 4° ( Figure 2 b). PDMS brushes could also be easily applied on a variety of materials, such as silicon and aluminum, leading to similarly improved wetting properties: water advancing contact angle and contact angle hysteresis on silicon and aluminum substrates are 108° ± 1°, 8° ± 1°, and 110° ± 1°, 12° ± 1°, respectively. In addition, PDMS brushes can be applied on curved surfaces. As shown in Figure 2 d and Video S1 (Supporting Information), a water drop slides off on a PDMS-coated cylinder copper surface (diameter: 24 mm) within 1 s. The thickness and grafting density of PDMS brushes were analyzed by atomic force microscopy (AFM) force spectroscopy ( Figure S4 , Supporting Information), giving a coating thickness of d = 6 ± 1 nm. It has been reported recently that lubricant thickness could be optimized for an efficient condensation process on lubricant-infused surfaces utilizing drop coarsening due to merging by lateral capillary forces. 59 Such a mechanism cannot work for the much thinner PDMS brush coatings, but their orders of magnitude smaller thickness leads to a negligible thermal resistance, which is even more beneficial for heat transfer. Following the equation Γ = ( d ρ N A )/ M w , the grafting density Γ of our PDMS brushes was estimated as 0.3 ± 0.05 chains·nm –2 , where ρ and N A represent mass density and Avogadro constant, respectively. 60 The water condensation dynamics on PDMS brushes are first studied at the microscale, in situ, using an environmental scanning electron microscope (ESEM, FEI Quanta 650 FEG). As shown in Figure 2 e, the droplets maintain spherical cap shapes. While growing, the droplets easily coalesce without visible contact line pinning, suggesting the excellent droplet mobility and water repellency of PDMS brushes for tiny condensing droplets. When considering superhydrophobic surfaces, it can be challenging to repel tiny droplets because the superhydrophobicity relies on the empty space within structures. 19 , 61 , 62 At high subcooling values, if the droplet size is comparable to or smaller than the size of the surface features, droplets may stay pinned inside these features, and this may cause coalescence at early growth stages with other droplets and consequently lead to the formation of filmwise condensation. 39 − 43 , 63 As a primary durability test for the PDMS brushes, we used drop sliding measurement in a custom-built device, as reported before. 
64 A needle connected to a peristaltic pump generated a series of water drops (45 μL each). The stage was tilted at 50°, and a high-speed camera (FASTCAM MINI UX100, see Methods for details) was attached to the stage to capture videos. After thousands of water drops had continuously slid over the surface, the PDMS brushes still exhibited good hydrophobicity ( Videos S2 – S5 , Supporting Information). The velocities of drop 1 and drop 5000 were 0.08 ± 0.02 m·s –1 and 0.10 ± 0.02 m·s –1 , respectively ( Figure S5 , Supporting Information). The corresponding dynamic advancing contact angle and contact angle hysteresis were 130° ± 3° and 66° ± 7° for drop 1, and 126° ± 3° and 56° ± 6° for drop 5000. The larger dynamic contact angle hysteresis compared to the static contact angle hysteresis shown in Figure 2 b is attributed to the substantially higher drop velocity. 65
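These sliding measurements can be rationalized with a simple force balance between the gravitational driving force on the drop and a Furmidge-type contact-angle retention force. The back-of-envelope sketch below uses the drop volume, tilt angle, and dynamic contact angles quoted above, but the contact-line width w and the retention prefactor k are assumed values that are not reported in this work, so the result is indicative only.

# Back-of-envelope force balance for a drop sliding on the tilted PDMS-brush surface (Python).
# Drop volume, tilt angle, and dynamic contact angles are from the text; w and k are assumptions.
import math

V = 45e-9                           # drop volume, m^3 (45 uL)
rho = 997.0                         # water density, kg/m^3
gamma = 0.072                       # water surface tension, N/m
g = 9.81                            # gravitational acceleration, m/s^2
alpha = math.radians(50)            # tilt angle of the stage

theta_adv = math.radians(130)       # dynamic advancing angle (drop 1)
theta_rec = math.radians(130 - 66)  # receding angle = advancing angle - hysteresis

w = 4e-3                            # assumed width of the contact line, m
k = 1.0                             # assumed retention prefactor of order unity

F_drive = rho * V * g * math.sin(alpha)
F_retain = k * w * gamma * (math.cos(theta_rec) - math.cos(theta_adv))

print(f"driving force   ~ {F_drive:.1e} N")
print(f"retention force ~ {F_retain:.1e} N")
# The two forces come out with comparable magnitudes, consistent with drops sliding
# steadily at ~0.1 m/s rather than pinning or accelerating freely.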
Condensation Heat Transfer Performance at Low Pressure The condensation heat transfer performance of PDMS brushes was tested with a custom-built condensation chamber under low saturation vapor pressure (30 mbar, steam temperature 24 °C) ( Figure S6 , Supporting Information, and Methods for details). 5 , 26 These conditions are comparable to the typical operating parameters of industrial condensers. The steam was generated from an electric boiler and flowed horizontally across the sample, where the flow speed was ∼4.6 m·s –1 . With increasing subcooling, the dropwise condensation mode on PDMS brushes was maintained ( Figure 3 a, Videos S6 – S8 , Supporting Information), without a change of the circular drop shape. On our superhydrophobic reference surface (see Experimental Section for details), 66 dropwise condensation was observed at subcooling <1 K. Superhydrophobic surfaces are known to show their jumping dropwise condensation mode only at low subcooling. 17 , 67 However, the drops showed an irregular shape at a subcooling of 2 K and finally turned into a liquid film at 3 K. This filmwise condensation is attributed to flooding of the surface structures at high subcooling rather than to chemical degradation, since the surface regains its superhydrophobicity upon drying after condensation. Filmwise condensation on the superhydrophobic surface still allows higher heat transfer than on the conventional hydrophilic surface. The reason is the difference in the wetting situation. On the conventional hydrophilic surface, a thick liquid film is formed, leading to a large thermal resistance. This can be recognized from the bulge formed at the bottom of the hydrophilic surface ( Figure 3 a and Video S6 ). On the superhydrophobic surface, filmwise condensation leads to flooding of the surface structure, which then acts as a wicking layer. Therefore, the film thickness is reduced to the order of the structure thickness of the superhydrophobic surface layer, and no bulge is visible at the lower end. This leads to a lower thermal resistance of the superhydrophobic layer during filmwise condensation ( Figure 3 a and Video S7 ). The different condensation modes for these three surfaces highlight the ability of PDMS brushes to sustain dropwise condensation over a wide range of subcooling values. This stability of dropwise condensation is also reflected in the trend of the heat transfer coefficient as a function of subcooling ( Figure 3 b). Although the superhydrophobic surface shows better heat transfer performance than PDMS brushes at low subcooling, the performance of the superhydrophobic surface decreases and approaches that for filmwise condensation around a subcooling of 2–3 K. Figure 3 c plots the heat fluxes of the three surfaces. At a subcooling of 3 K, PDMS brushes exhibit a heat flux of 233 kW·m –2 , which is 20% higher than that of the superhydrophobic surface. It should be noted that we could not measure the heat transfer coefficient at higher subcooling (>3 K) for these better-performing surfaces because of their high efficiency. Nevertheless, it is reasonable to predict that PDMS brushes can still maintain dropwise condensation at higher subcooling values or condensation rates, owing to the absence of micro- and nanostructures that can eventually become flooded with water. Condensation Heat Transfer Performance at High Pressure To quantitatively evaluate the condensation heat transfer performance under harsh conditions, the PDMS brush surface was tested in a high-pressure flow chamber where the steam pressure and temperature were 1.42 bar and 111 °C, respectively ( Figure S7 , Supporting Information). 5 The experiment was conducted under flow condensation, with the steam flowing vertically at velocities of 3 or 9 m·s –1 . As shown in Figure 4 a,b, PDMS brushes exhibited a higher heat transfer coefficient at both steam velocities. At 3 m·s –1 , the average heat transfer coefficient reached 125 kW·m –2 ·K –1 , which is ∼5 times higher than that on a bare CuO reference surface (filmwise condensation). Due to enhanced advection, the heat transfer performance was better on both surfaces at a steam velocity of 9 m·s –1 . On PDMS brushes, the heat transfer coefficient reached 233 kW·m –2 ·K –1 , which is ∼7 times higher than that on the CuO surface. To test the coating durability under condensation, we focused on the PDMS brushes at a steam velocity of 3 m·s –1 for an extended period (∼8 h). The heat transfer coefficient and the corresponding subcooling were continuously measured over 8 h, while condensation rates were recorded at several intervals ( Figure 4 c and Video S9 , Supporting Information). In the first 0.5 h, the system had to stabilize, and the heat transfer coefficient initially increased and oscillated. After ≈0.5 h, the experimental conditions were stable, and the surface exhibited a heat transfer coefficient of ∼121 kW·m –2 ·K –1 . Up until 7 h, dropwise condensation remained the dominant mode. Afterward, an increase in the departure droplet sizes was observed, and localized filmwise condensation islands appeared (∼30% of the area showed filmwise condensation). However, the heat transfer coefficient remained as high as 103 kW·m –2 ·K –1 at 8.8 h, which is still more than 4 times higher than that of filmwise condensation. Such an accelerated durability test demonstrates that the ultrathin PDMS brushes sustain efficient dropwise condensation under harsh conditions for ∼8 h, showing their potential for practical applications where the conditions are much milder. 68 The eventual filmwise condensation may be related to oxidation of the copper, which degrades the wetting properties of the PDMS coating. 67 , 69 It should be noted that even after degradation, the wetting properties of the coating can be restored by applying a small amount of PDMS oil ( Figure S8 , Supporting Information). Moreover, looking ahead, although the PDMS brush coating is grafted onto a flat substrate here, it could also be applied to structured surfaces to further enhance condensation. 70
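Two quick consistency checks help connect these numbers to the coating itself. The sketch below reproduces the grafting-density estimate Γ = d ρ N A / M w quoted earlier and compares the conductive thermal resistance of the 6 nm brush layer with the overall condensation resistance 1/h. The bulk PDMS density and thermal conductivity used here are assumed literature-style values, not parameters reported in this work, so the output is an order-of-magnitude check only.

# Consistency checks (Python). From the text: d = 6 nm, Mw = 11,740 g/mol, h ~ 125 kW m-2 K-1.
# Assumed: bulk PDMS density and thermal conductivity.
d = 6e-9                 # brush thickness, m (AFM value from the text)
Mw = 11.740              # PDMS molecular weight, kg/mol (from the text)
rho_pdms = 970.0         # assumed bulk PDMS density, kg/m^3
k_pdms = 0.16            # assumed PDMS thermal conductivity, W/(m K)
N_A = 6.022e23           # Avogadro constant, 1/mol
h = 125e3                # measured heat transfer coefficient at 3 m/s steam, W/(m^2 K)

grafting_density = d * rho_pdms * N_A / Mw          # chains per m^2
print(f"grafting density ~ {grafting_density*1e-18:.2f} chains/nm^2")   # ~0.30, as in the text

R_coating = d / k_pdms                              # conductive resistance of the brush layer
R_total = 1.0 / h                                   # overall condensation resistance
print(f"R_coating / R_total ~ {R_coating / R_total:.3%}")               # well below 1%
# The nanometric coating therefore adds a negligible series resistance, unlike
# micrometer-thick lubricant or polymer films.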
Antifouling Test We further measured the antifouling property by immersing the substrates in solutions containing Escherichia coli ( E. coli ) and Staphylococcus aureus ( S. aureus ), respectively. Both E. coli and S. aureus are commonly found bacteria: E. coli is Gram-negative and rod-shaped, while S. aureus is Gram-positive and spherical. After 1 day of culture at 37 °C, the samples were taken out and washed gently to remove floating bacteria. Scanning electron microscopy (SEM) images showed a significantly reduced number of bacteria on the PFDTS and PDMS surfaces compared with the uncoated surface ( Figures 5 a and S9 , Supporting Information). Specifically, the attached E. coli number density was (2.7 ± 0.3) × 10 11 m –2 on Si surfaces, (1.5 ± 1.1) × 10 9 m –2 on PFDTS surfaces, and (2.6 ± 1.0) × 10 9 m –2 on PDMS surfaces ( Figure 5 b). For S. aureus , the number density on the three surfaces was (9.1 ± 4.9) × 10 11 m –2 , (1.1 ± 0.8) × 10 10 m –2 , and (1.0 ± 0.6) × 10 10 m –2 , respectively. The calculated antibacterial efficiency (i.e., the reduction in the number of attached bacteria relative to the number on the bare Si surface) reached ∼99% on both PFDTS and PDMS surfaces, showing that the antifouling property of the PDMS surface is comparable to that of the conventional fluorinated PFDTS surface. A quick anticontamination test showed that the PDMS brushes effectively repel adhesive materials such as chalk powder and chili sauce, revealing their self-cleaning properties ( Figure S10 and Videos S10 – S11 , Supporting Information).
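The ∼99% figure follows directly from the measured number densities. A minimal sketch of the calculation, using the mean values quoted above and the efficiency definition given in the text:

# Antibacterial efficiency from attached-cell number densities (Python).
# Densities (per m^2) are the mean values quoted in the text.
densities = {
    "E. coli":   {"Si": 2.7e11, "PFDTS": 1.5e9,  "PDMS": 2.6e9},
    "S. aureus": {"Si": 9.1e11, "PFDTS": 1.1e10, "PDMS": 1.0e10},
}

for bug, d in densities.items():
    for coating in ("PFDTS", "PDMS"):
        efficiency = 1.0 - d[coating] / d["Si"]   # fraction of attachment suppressed vs. bare Si
        print(f"{bug:10s} on {coating}: {efficiency:.1%}")
# All four combinations come out at ~99%, i.e., the nonfluorinated PDMS brushes match the
# fluorinated PFDTS reference within the reported uncertainties.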
Conclusions We demonstrate that a low-cost, flat, antifouling, and nonfluorinated PDMS brush coating can sustain high-performing dropwise condensation under extreme conditions, e.g., high subcooling and high steam shear flow and temperature. The PDMS brushes consist of siloxane polymer chains, with one end covalently grafted onto the substrate and the other end free and flexible. The coating is thin (thickness of 6 nm) and water-repellent (advancing contact angle of ∼110° and contact angle hysteresis of ∼10° on copper). The experimental results show that PDMS brushes on copper exhibit dropwise condensation with ∼3–7 times higher heat transfer coefficients than filmwise condensation on pristine copper substrates in both the low-pressure (30 mbar) and high-pressure (1.4 bar) chambers. The PDMS brushes also exhibit excellent durability in high-pressure environments, which is confirmed by the 8-h condensation test under the harsh conditions of 1.4 bar steam pressure and 3 m·s –1 steam velocity. Additionally, PDMS brushes reduce the attachment of bacteria such as E. coli and S. aureus by ∼99%. Therefore, PDMS brushes are promising candidates for a low-cost, environmentally friendly, and effective coating for condensation heat transfer applications.
Heat exchangers are made of metals because of their high heat conductivity and mechanical stability. Metal surfaces are inherently hydrophilic, leading to inefficient filmwise condensation. It is still a challenge to coat these metal surfaces with a durable, robust, and thin hydrophobic layer, which is required for efficient dropwise condensation. Here, we report the nonstructured and ultrathin (∼6 nm) polydimethylsiloxane (PDMS) brushes on copper that sustain high-performing dropwise condensation in high supersaturation. Due to the flexible hydrophobic siloxane polymer chains, the coating has low resistance to drop sliding and excellent chemical stability. The PDMS brushes can sustain dropwise condensation for up to ∼8 h during exposure to 111 °C saturated steam flowing at 3 m·s –1 , with a 5–7 times higher heat transfer coefficient compared to filmwise condensation. The surface is self-cleaning and can reduce the level of bacterial attachment by 99%. This low-cost, facile, fluorine-free, and scalable method is suitable for a great variety of heat transfer applications.
Experimental Section Surface Preparation PDMS brushes are prepared by drop-casting. First, the substrates (silicon, aluminum, or copper) were washed in acetone, 2-propanol, and deionized water with ultrasonication for 10 min, respectively. Then, they are treated with an oxygen plasma (Diener Electronic Femto, 120 W, 6 cm 3 ·min –1 oxygen flow rate) for 5 min. Afterward, several drops of PDMS (polydimethylsiloxane, 100 cSt, Thermo Scientific) are applied on the substrate, which is then covered by spontaneous wetting. After full spreading, the substrates were put in the oven at 100 °C for 24 h and washed with acetone afterward to remove any unbound residue. This preparation method is repeated twice. PFDTS surfaces are prepared via chemical vapor deposition in a vacuum desiccator. Twenty μL of 1 H ,1 H ,2 H ,2 H -perfluorodecyltrimethoxysilane is added before the desiccator is vacuumed below 20 mbar. The reaction lasts for 4 h. Hydrophilic CuO surfaces are prepared by immersing oxide-free pristine copper in boiling water for 30 min. The superhydrophobic surfaces are fabricated as described before. 66 Briefly, after the cleaning process, the substrates are immersed into a 9.25% V/V aqueous solution of hydrochloric acid for 10 min to fabricate microstructures. Then they are immersed in boiling water for 5 min to fabricate nanostructures on top of microstructures through the boehmitage process. Finally, the substrates are coated with a thin hydrophobic film through C 4 F 8 plasma deposition. Surface Characterization Advancing and receding contact angles are measured using a goniometer (OCA35, Dataphysics). The volume of sessile water was gradually (1 μL·s –1 ) increased from 5 to 20 μL and decreased back to 5 μL. The contact angles were determined by fitting an ellipse to the contour images. The surface morphology was measured using Dimension Icon AFM (Bruker) in tapping mode. Reflective aluminum Si cantilevers (OLTESPA-R3) with a spring constant of ∼2 N·m –1 were used. The thickness of the brush layer was measured using an AFM instrument (JPK Nanowizard 4) in force mapping mode. Force–distance curves were recorded with a grid of 16 × 16 points on an area of 1 × 1 μm 2 . For the observation of condensation using an environmental scanning electron microscope (ESEM) (Quanta 650 FEG, FEI), the samples were placed on a custom-made copper platform, which was cooled with a recirculating chiller and maintained at ∼2 °C. The drop velocity and dynamic contact angles on the surface were determined by analyzing videos of drop sliding via a MATLAB program (DSAfM). The videos were recorded using a high-speed camera (FASTCAM MINI UX100, Photron with a Titan TL telecentric lens, 0.268x, 1 in. C-Mount, Edmund Optics) at a frame rate of 500 FPS. Briefly, the edge position of the drops was detected, after the drop images were corrected by subtracting the background from the original images and tilting according to the background image. The drop velocities were calculated by the displacement from each frame. Dynamic advancing and receding contact angles were determined by applying a fourth-order polynomial fit to the drop contour in each image. Condensation Heat Transfer Measurements We used two custom-made experimental setups for the condensation tests similar to our recent work. 5 , 26 The chambers are evacuated first before the introduction of steam. An electric boiler was used to generate steam from deionized water. Condensation tests were performed with saturated steam at a pressure of 1.42 bar or 30 mbar. 
The samples were mounted on a copper block. Several temperature sensors inside the copper block were used to determine the condensation heat flux ( q ′′) through the surface following the equation q ′′ = k c ( A c / A e )·(d T /d x ). Here, k c is the thermal conductivity of the copper cooler, A c is the cross-sectional area of the cooler, A e is the area of the exposed condensing surface, and d T /d x is the constant thermal gradient along the sensor array. d T /d x was computed from a least-squares linear fit of the temperatures measured with the temperature sensor array. In the low-pressure (30 mbar) chamber, the surface temperature was measured by two temperature sensors attached to the surface. Videos were recorded with a DSLR (D7500, Nikon) and a macro lens (AF Micro-Nikkor 200 mm f/4D IF-ED, Nikon). In the high-pressure chamber (1.42 bar), the surface temperature was estimated using a thermocouple placed inside the substrate. The videos were recorded using a high-speed camera (FASTCAM Mini UX100, 2000 FPS) and the same lens. More details are given in Figures S6 and S7 , Supporting Information. Antifouling Tests To test the antifouling property, the samples (1 × 1 cm 2 ) are first sterilized by UV light (366 nm) for 15 min. Then, the samples are placed in a sterile 24-well plate, and each well contains 2 mL of the bacterial test suspension (refer to Figure S11 in the Supporting Information for the preparation of the bacterial suspension). The samples are incubated for 1 day at 37 °C before the medium is removed and the samples are gently washed 3 times with 1 mL of saline solution (0.85% NaCl in Milli-Q water). Afterward, the bacteria are fixed with 1 mL of glutaraldehyde (Sigma-Aldrich, 2.5% (v/v) in the saline solution) for 30 min at room temperature. Subsequently, the coatings are gently washed 3 times with the saline solution and dehydrated with a graded ethanol series (30, 40, 50, 60, 70, 80, 90, 95, and 99.89%, 15 min each, last step twice). Lastly, the samples are dried under vacuum at room temperature overnight prior to SEM imaging. For bacterial counting, more than 30 images are taken at random positions by scanning electron microscopy (SEM, LEO 1530 Gemini, Zeiss).
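The heat-flux evaluation described above lends itself to a short numerical illustration: fit a line to the sensor-array temperatures, convert the gradient into a heat flux with Fourier's law, and divide by the subcooling to obtain the heat transfer coefficient. All sensor positions, temperatures, and the copper conductivity in the sketch below are illustrative placeholders, not values from this work.

# Illustrative evaluation of q'' = k_c (A_c/A_e) dT/dx and h = q''/(T_sat - T_surf) (Python).
# The numbers below are made-up placeholders for demonstration only.
import numpy as np

k_c = 390.0                                 # assumed thermal conductivity of the copper cooler, W/(m K)
area_ratio = 1.0                            # assumed A_c / A_e
x = np.array([2e-3, 6e-3, 10e-3, 14e-3])    # sensor depths below the condensing surface, m
T = np.array([21.0, 19.8, 18.6, 17.4])      # temperatures at those depths, deg C (placeholders)

slope, intercept = np.polyfit(x, T, 1)      # least-squares fit; slope = dT/dx (negative here)
q_flux = -k_c * area_ratio * slope          # Fourier's law: heat flows from the surface into the cooler

T_surface = intercept                       # extrapolated surface temperature at x = 0
T_sat = 24.0                                # steam saturation temperature at 30 mbar (from the text)
h = q_flux / (T_sat - T_surface)            # heat transfer coefficient

print(f"q'' = {q_flux/1e3:.0f} kW/m^2, subcooling = {T_sat - T_surface:.1f} K, h = {h/1e3:.0f} kW/(m^2 K)")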
Data Availability Statement The authors declare that the data supporting the findings of this study are available within the paper and its Supporting Information or from the corresponding author upon reasonable request. Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsami.3c17293 . Schematic of the thermal resistance network of PDMS brushes; bond dissociation energies; schematic showing the bonding process of PDMS brushes on the copper; water contact angles on PDMS brushes; estimation of the grafting density of PDMS brushes; water drop sliding velocity and contact angles on PDMS brushes coated copper plate; device for condensation heat transfer measurements at 30 mbar and 1.4 bar; durability test of PDMS-coated copper in hot water (∼100 °C) and its recovery; antifouling property of different surfaces; photographs showing the self-cleaning property of PDMS brushes; Schematic of bacterial suspension preparation; Thickness of the current state-of-the-art superhydrophobic surfaces; thickness of the current state-of-the-art lubricant infused surfaces; and cost estimation of PDMS brushes on a copper plate ( PDF ) Water drop slide off a PDMS-coated copper tube. Tilt angle: 25° ( AVI ) Water drop sliding on PDMS brushes coated copper plate: 1st drop ( AVI ) Water drop sliding on PDMS brushes coated copper plate: 100th drop ( AVI ) Water drop sliding on PDMS brushes coated copper plate: 1000th drop ( AVI ) Water drop sliding on PDMS brushes coated copper plate: 5000th drop ( AVI ) Left: Steam condensation on a hydrophilic copper oxide surface. Right: real-time heat transfer coefficient and subcooling. Steam pressure: 30 mbar. Playback: 240× ( MP4 ) Left: Steam condensation on a superhydrophobic aluminium surface. Right: real-time heat transfer coefficient and subcooling. Steam pressure: 30 mbar. Playback: 240× ( MP4 ) Left: Steam condensation on PDMS brushes coated copper surface. Right: real-time heat transfer coefficient and subcooling. Steam pressure: 30 mbar. Playback: 240× ( MP4 ) Steam condensation on PDMS brushes coated copper surface. Steam pressure: 1.4 bar. Playback: 0.1× ( MP4 ) Cleaning process of PDMS brushes coated copper plate being contaminated with chalk powder ( AVI ) Cleaning process of PDMS brushes coated copper plate being contaminated with chili sauce ( AVI ) Supplementary Material Author Present Address ⊥ Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States Author Contributions S.L., A.M., T.P., D.P., H.-J.B., and M.K. designed the research and experiments. S.L. and A.M. prepared the surface. S.L. carried out the experiments and characterization unless otherwise stated below. C.W.E.L. conducted the ESEM measurements. C.W.E.L., M.D., and K.R. conducted the condensation heat transfer measurements. P.S. and E.G. prepared the superhydrophobic surface for reference. S.L. and E.Y. conducted the antifouling measurements. S.L., C.W.E.L., M.D., K.R., E.Y., P.S., E.G., A.M., D.P., M.K., and H.-J.B. wrote the manuscript. All authors have given approval to the final version of the manuscript. Open access funded by Max Planck Society. The authors declare no competing financial interest. Acknowledgments This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (HARMoNIC, grant number 801229; Advanced grant DyanMo, No 883631). 
Shuai Li thanks the China Scholarship Council (CSC) for the financial support. Till Pfeiffer acknowledges the financial support by the German Research Foundation - Project number 265191195—SFB 1194; subproject C03.
CC BY
no
2024-01-16 23:43:51
ACS Appl Mater Interfaces. 2023 Dec 20; 16(1):1941-1949
oa_package/5f/45/PMC10788830.tar.gz
PMC10788831
38109219
Introduction Thermal and electrochemical catalysis are key to sustainability. 1 However, industrial catalysts are complex materials systems operating under harsh conditions. The active parts of an industrial catalyst are nanoparticles that expose a variety of facets with different surface orientations at which the catalytic reactions occur. These nanoparticle facets are nearly impossible to study under the conditions in which the catalysts are commonly used. Instead, operando measurements have mainly been performed on more simple low-index single-crystal surfaces. Typically, the idea is to use the results of such measurements to model individual nanoparticle facets, which enables us to piece together the full behavior of a nanoparticle, given that the surface orientations it exhibits are known. This approach has been a cornerstone of fundamental catalysis research in the past decades. 2 , 3 With CO oxidation into CO 2 being used as a model reaction, many operando studies of CO oxidation have been conducted on the low-index (100), (110), and (111) surfaces of transition metals. These have been described extensively in terms of both their oxide formation properties and their catalytic activities. In addition to this, studies have been performed on high-index stepped or kinked surfaces. Such studies have proven a clear correlation between the surface orientation and the catalytic properties of said surface. 4 − 10 However, most of these studies have focused on only one or a rather limited subset out of all possible surface orientations. This approach requires many consecutive experiments and many catalytic samples to fully probe the surface orientation space. Furthermore, the use of multiple samples makes it very difficult to ensure consistent gas conditions throughout the experiments, as each sample will change the gas conditions depending on its activity; 11 a more active sample will inevitably have more product gas and less reactant gas in its gas boundary layer. In the past decade, polished polycrystals have been proposed as a solution to this problem. These crystals, which consist of a large number of randomly oriented grains, will, when polished, exhibit a wide variety of surface orientations. Therefore, an increasing number of studies have been performed in which two-dimensional (2D) techniques are used to study polycrystalline samples. 12 − 15 With appropriate 2D resolution, these polycrystals now act as a collection of a vast number of separate single crystals, all within one sample. 16 In this way, multiple surface orientations can be measured simultaneously, which is much more efficient and also makes it easier to compare results, as we can expect similar experimental conditions between neighboring grains. The size of the grains and thus the number of orientations per surface area can also be tuned by changing the manufacturing process of the polycrystals. Studies with polycrystals are often conducted by first mapping the orientations of grains within a region of interest (ROI) on the sample using electron backscatter diffraction (EBSD). The same ROI is then investigated using 2D-capable techniques such as photoemission electron microscopy (PEEM) 17 or scanning photoelectron microscopy (SPEM). 13 A number of individual grains in the ROI are then chosen, and the properties of these grains are examined in detail. 
Unfortunately, in these studies, the grains are treated as individual data points rather than as a quasi-continuous set of single-crystal surfaces covering a vast number of surface orientations. Another caveat is that the PEEM and SPEM techniques can only operate under low-pressure conditions. 2D-surface optical reflectance (2D-SOR) as a technique to study catalysts was introduced by Onderwaater et al., 18 showing that a simple setup measuring the optical reflectance of a metal sample can be used to obtain information about the sample oxidation or roughness. The technique has also been used to study corrosion-related phenomena. 19 − 21 Further experiments have shown that even very thin oxides with a thickness of only a few nanometers can be detected. 22 We have since further developed this method and have also used it in combination with high-energy surface X-ray diffraction (HESXRD), where we correlate changes in the surface reflectance with changes in the surface oxide thickness and roughness on a single-crystal Pd(100) sample. 14 , 23 , 24 It turns out that the 2D-SOR signal is sensitive enough to detect the formation of a 2–3 Å thick surface oxide 25 on Pd(100). Especially when combined with other operando techniques, such as mass spectrometry (MS) or PMIRRAS, 26 2D-SOR can help correlate changes in surface structure with changes in chemical activity. Another advantage of 2D-SOR is its high time resolution, which is primarily limited by the camera used to image the reflectance rather than by the sample itself. This allows for acquisition rates high enough to follow the gas diffusion over the sample under atmospheric conditions. 11 Furthermore, 2D-SOR can operate at any pressure, as opposed to the electron-based 2D experimental techniques mentioned above. In this work, we combine EBSD and 2D-SOR to characterize a polycrystalline sample surface in an operando study. Previously, we reported on the potential of this approach. 14 In this article, we exploit this previous progress to explore a much larger data set and report on variations in the thickness of the PdO formed on different surface orientations. However, instead of selecting a number of grains in the ROI and treating them individually, we treat the grains as a massive collection of data points. This approach provides new information on a multifaceted catalyst at work under operating conditions, something that has not been achieved previously, and it creates additional challenges in how to present the data efficiently. In ref ( 13 ), the concepts of the step edge parameter (SEP) and the step density parameter (SDP) were devised, which describe the surface in two simple variables. We chose to plot our data against both of these variables. Furthermore, we visualize the data in a way that does away with the traditional spatial representation of the polycrystal by plotting the reflectivity as a function of the surface orientation using the so-called inverse pole figure (IPF), which is another representation of surface orientations commonly used in crystallography. 27 This way of visualizing data is very useful for drawing conclusions about how the surface orientation affects surface reactivity and has been used previously to visualize data as a function of the surface orientation. 28 , 29 It also shows the strength of the 2D-SOR technique in quickly obtaining large amounts of 2D-surface information in operando catalysis experiments.
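To make the IPF representation concrete, the sketch below maps a grain's surface normal into the standard stereographic triangle of a cubic crystal (corners [001], [101], [111]). This is the generic textbook construction shown purely for illustration; it is not the authors' analysis code, and the example indices are arbitrary.

# Generic sketch (Python): place a cubic surface normal in the standard stereographic
# triangle used for inverse pole figures. Example inputs are arbitrary.
import numpy as np

def ipf_coordinates(normal):
    """Reduce a surface normal to the cubic fundamental zone and project it stereographically."""
    n = np.abs(np.asarray(normal, dtype=float))
    n /= np.linalg.norm(n)
    c, b, a = np.sort(n)              # sorted so that a >= b >= c
    x, y, z = b, c, a                 # fundamental zone: z >= x >= y >= 0
    # Stereographic projection from the south pole onto the equatorial plane.
    return x / (1.0 + z), y / (1.0 + z)

# The triangle corners map to (0, 0), (~0.414, 0), and (~0.366, 0.366):
for hkl in [(0, 0, 1), (1, 0, 1), (1, 1, 1), (2, 1, 0)]:
    X, Y = ipf_coordinates(hkl)
    print(f"{hkl}: ({X:.3f}, {Y:.3f})")
# A measured quantity (e.g., the reflectivity change of a grain) can then be plotted as a
# color at (X, Y), which is how orientation-resolved maps of this kind are assembled.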
As a proof of concept, we conducted an operando experiment under near-ambient pressure conditions in which we track the reflectivity of the grains of a Pd-polycrystal performing CO oxidation in an oxygen-rich environment. We then link the reflectivity changes to the surface oxidation. Even though we only present data from this one experiment, the setup and principle presented herein can easily be adapted to other samples and reaction environments, such as the solid–liquid interface in electrochemical experiments. 30 , 31
Results and Discussion Experimental Results To demonstrate the approach of combining a 2D-capable technique with an orientation mapped polycrystal, we performed an experiment on CO oxidation under oxygen-rich gas conditions. The sample was heated in a mixture of 40% O 2 , 4% CO, and 56% Ar at a pressure of 150 mbar and a total flow of 100 mL/min. The sample surface reflectivity was monitored using a 2D-SOR microscope at an image acquisition rate of 50 Hz. The sample temperature was gradually increased from room temperature to around 450 °C. This section will give a short overview of the results, which will then be discussed in more depth in the Discussion section. To begin with, we examine the reflectivity of six representative grains as highlighted in Figure 4 . Figure 5 shows the partial pressure of CO 2 and the sample temperature during the experiment as well as the reflectivity trends of the highlighted grains. As the sample is heated, the CO 2 production increases exponentially until the reaction reaches the mass-transfer limit (MTL) at around 200 °C. This event is also known as the catalytic ignition. 37 The small increase in the level of CO 2 at 280 s is attributed to carbon desorbing from the heater. It should be noted at this point that this means that all grains ignite more or less simultaneously—more on that later. This article focuses on the three time ranges indicated in Figure 5 , the first of which is the aforementioned ignition. The other two, denoted a and b, are with the sample in a highly active state. The reflectivity map in each of these ranges has been normalized with an image of the clean metallic sample. The change in the reflectivity of the sample surface at the ignition is shown in Figure 6 . Panel a shows the development of the reflectivity of the grains highlighted in Figure 4 during the catalytic ignition. Here we observe a very small decrease in the reflectance of around 0.3% for some grains. Panel b shows the reflectivity of the sample in the ROI, and panel c shows the same data, but plotted in the IPF. We observe that in particular the grains close to the (111) and (100) orientations exhibit a clear drop in reflectance. The changes in reflectivity later in time, in regions a and b, are shown in Figure 7 . Here, the surface is in the MTL regime, while the temperature has been increasing. Panels a and b show the reflectivity in regions a and b, respectively. We observe a significant decrease in the surface reflectivity across most grains except those close to the (111) and (110) orientations. This increases as the sample is further heated. Note the difference in color scale between panels a and b. In the Supporting Information , a video showing the oxidation of the sample during the entire experiment is provided. By calculating the SDP and SEP for every pixel in the image, based on the EBSD data, we can also plot the reflectance data in region a as a function of the SDP and SEP. This is shown in Figure 8 . Discussion Comparing the results in this study to existing literature is not an easy task due to the very large number of surfaces available. In this discussion we will focus on the behavior of six representative grains with orientations close to the (100), (110), (111), (553), (522), and (210) orientations as shown in Figure 4 . First, we can conclude that the surface orientations on the left edge of the IPF have A-type steps, whereas those on the right edge have B-type steps. 
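The per-grain reflectivity traces and normalized maps described above follow from a straightforward processing of the image stack. The sketch below indicates one way such a pipeline could look; it is an illustration rather than the authors' actual analysis code, and the image stack, clean-surface reference, and grain-label map are placeholder inputs.

# Illustrative 2D-SOR processing sketch (Python): normalize frames to the clean metal surface
# and average the reflectivity per grain using an EBSD-derived label map.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.uniform(0.95, 1.0, size=(100, 64, 64))   # placeholder image stack (time, y, x)
clean = np.ones((64, 64))                             # placeholder image of the clean metallic sample
grain_labels = rng.integers(0, 5, size=(64, 64))      # placeholder grain map from EBSD (5 grains)

normalized = frames / clean                           # relative reflectivity, pixel by pixel

def grain_trace(label):
    """Mean relative reflectivity of one grain as a function of frame index."""
    mask = grain_labels == label
    return normalized[:, mask].mean(axis=1)

for g in range(5):
    trace = grain_trace(g)
    change_percent = 100.0 * (trace[-1] - 1.0)        # change relative to the clean metal
    print(f"grain {g}: final reflectivity change = {change_percent:+.2f}%")
# With real data, a drop of a few tenths of a percent at ignition marks surface-oxide
# formation, while a drop of several percent at higher temperature marks bulk-oxide growth.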
All areas in between will have a mixture of both types of steps, with different ratios depending on the position in the IPF. Most studies of catalytic CO oxidation on Pd agree that as the sample is heated, it “ignites”, whereby it transitions from an inactive, CO-poisoned stage in which the surface is covered by CO to a stage in which the sample is active and mainly covered in adsorbed oxygen or Pd oxides. 38 The ignition temperature depends not only on the surface structure but also on the gas conditions and is further affected by the coupling between the gas and the surface. 11 , 39 At this point, the activity also reaches the MTL, where it is limited by the diffusion of the reactant gases to the surface. As the sample is heated further, bulk oxides may form, which in the case of Pd may also be catalytically active. 40 The following discussion will be split into two parts, treating the thin oxide formation at the ignition and the subsequent thicker oxide separately. Ignition We begin by comparing the results of this study with the literature concerning low-index Pd surfaces, which have been studied extensively. These can be found in the corners of the IPF. Starting with the red (100) surface, we see that the reflectivity of the surfaces toward the (100) orientation decreases by around 0.25% at the catalytic ignition, as shown in Figure 6 a. This is attributed to the formation of the well-known 2–3 Å thick ( √ 5× √ 5) R 27° surface oxide, which has been extensively described in the literature and has been found to form both in simple oxidation studies and during CO oxidation. 41 − 46 The decrease in the reflectivity matches the expected oxide thickness. We move next to the blue (111) surface, which has been studied during CO oxidation at near-ambient pressures by Toyoshima et al. 47 and in UHV by Zhang et al. 48 In both studies, the surface was shown to form a Pd 5 O 4 surface oxide. 49 In our measurement, we see that the surfaces very close to the (111) orientation lose around 0.3% reflectivity at the ignition, which we attribute to the formation of this surface oxide. In contrast, the green Pd(110) surface, which has been less studied, remains the brightest surface throughout the ignition. This suggests that no surface oxide is formed and that the reaction proceeds via the Langmuir–Hinshelwood mechanism. This lack of a surface oxide is consistent with the results of Westerström et al. 50 The discussion of the surface oxide formation on the high-index Pd surfaces is more complex. We begin by looking at the (553) and (522) surfaces. At the ignition, the cyan (553) surface loses 0.25% reflectivity while the purple (522) surface loses around 0.1%, suggesting differences in surface oxide formation. The (553) surface is known to exhibit complex behavior in which the surface facets into (111) terraces of various lengths connected by (110) steps, resulting in (111) and (332) surface orientations. This is further complicated by the fact that the faceting seems to be different in pure oxidation and in CO oxidation. 51 , 52 Looking at Figure 6 c, we also see that the surfaces along the edge between the (111) and (110) directions vary in reflectivity in a rather sporadic fashion, further confirming the complexity of the oxygen-induced faceting of vicinal Pd surfaces. Some of the orientations produced by the refaceting may support surface oxide formation, while others do not. The (112) surface, which is close to the (522) orientation, has also been shown to exhibit faceting. 53
For the range of surfaces between the (100) and (110) surface orientations with B-type steps, we observe a gradually decreasing amount of surface oxide, which is consistent with the inability of the (110) surface to form a surface oxide. 50 Bulk Oxide Formation As the sample is heated further, some grains exhibit a significant reduction in reflectivity, which we attribute to the formation of a thicker bulk oxide, as shown in Figure 7 . We can again compare the results of this measurement with the literature, starting with the low-index surfaces. As the sample temperature is increased further, the reflectivity of the surfaces close to (100) begins to decrease significantly, which is attributed to the formation of a thicker bulk oxide. This has also been described in the literature. 41 , 42 , 45 In a study by Goodwin et al. performed under similar gas conditions, this oxide was found to be around 70 Å thick, 54 which is within the same order of magnitude as suggested by our measurement. Moving on, we see that the grains close to the (110) orientation remain bright, suggesting that no bulk oxide is formed. This is consistent with the findings of Toyoshima et al., who studied Pd(110) during CO oxidation 55 at 1 mbar. They found that the surface primarily remains covered in chemisorbed oxygen during the reaction, with only small amounts of bulk oxide being formed even at high O 2 :CO ratios. The surfaces close to (111) behave similarly, losing very little reflectance. This indicates that very little bulk oxide is formed after the surface oxide formation, which is consistent with the results of Toyoshima et al., where only very small amounts of bulk oxide were observed. 56 Moving on to bulk oxide formation on the high-index surfaces, we see that it is more consistent than the surface oxide formation, with the entire B-edge between the (111) and (110) orientations remaining brighter, suggesting that it oxidizes considerably less than the rest of the sample surface. Furthermore, it is apparent that there are different regions in the IPF that seem to behave similarly. For example, there is a very abrupt jump in reflectance between the (100) and (110) orientations. This could be attributed to a similar refaceting process occurring throughout each region, where longer terraces are connected by larger steps. It is noteworthy that the darkest area is around the (210) orientation, where either significant refaceting occurs or significant amounts of bulk oxide are formed. To our knowledge, no surfaces close to this orientation have been previously studied, making it difficult to attribute this effect to a particular surface behavior. Step Density Parameter and Step Edge Parameter We have also plotted the change in reflectivity of the grains in region a as a function of the SDP and SEP as introduced by Winkler et al., 13 as shown in Figure 8 . Panel a shows the reflectivity of the grains as a function of the SDP, whereas panel b shows the reflectivity as a function of the SEP. We conclude that there is a linear correlation between the SDP and the sample reflectivity. The outliers are the grains close to the (110) orientation, colored green. Concerning the SEP, we see no clear correlation. In an attempt to explain this, we remind ourselves that the EBSD technique can determine only the bulk orientation of the crystal; the surface orientation indicated by EBSD thus assumes a perfect cut of the bulk with no restructuring or faceting. This perfectly cut surface is then used to calculate the SEP and SDP.
We speculate that the difference in correlation between the SDP and the SEP arises because the SEP is more sensitive to surface restructuring, as complex edge structures are known to reconstruct into series of straight edges. 50 , 57 This is illustrated in Figure 9 . Thus, the actual SEP of the surface deviates more from the value indicated by EBSD than the actual SDP does. Nonetheless, these data suggest that surfaces with a higher step density oxidize more, even across the large number of surfaces probed here. Catalytic Activity A property of a catalyst that is perhaps more interesting than oxide formation is the catalytic activity itself, which we were unable to measure in this experiment. The question then is, does a correlation between oxide formation at the ignition and catalytic activity exist? Note that we are dealing with two separate phenomena here: the desorption of the CO that poisons the surface and blocks the catalytic reaction, and the subsequent oxidation of the surface. In this work, we measure only the latter. This means that we do not necessarily expect a correlation between oxidation and activity. However, one could hypothesize that increased activity results in more oxidizing, CO-depleted gas conditions, which in turn lead to increased oxide formation. The activity of curved Pd crystals, which cover the region from the (553) orientation via (111) to the (322) orientation, has been investigated previously. 7 , 8 , 58 Blomberg et al. show that the side with B-type steps becomes active before that with the A-type steps, which in turn ignites before the (111) surface with no steps. This suggests that the (111) surface, which forms the thickest oxide layer ( Figure 6 c), is the least active. This is also the case in the study by Vogel et al., where the activities of the low-index surfaces were compared using PEEM on a polycrystalline sample. 59 They find that ignition occurs in the order (110)–(100)–(111), which in our case corresponds to the order from thinnest to thickest oxide (brightest to darkest). Thus, perhaps unsurprisingly, there is no clear correlation between activity and oxide formation at the ignition. Gas Conditions Another advantage of using polycrystals for surface science experiments is that samples with grains that are small compared to the length scale over which the gas composition varies are one of the few ways to really ensure that all grains are exposed to nearly identical gas conditions throughout the experiment. Although the gases fed into the reactor can be accurately controlled using MFCs even when using single crystals, it is nearly impossible to correct for the fact that the active catalyst changes the gas environment.
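Returning to the SDP trend in Figure 8 a, this kind of relationship can be quantified with a simple per-grain regression. The sketch below is a generic illustration, not the authors' analysis: the SDP values and reflectivity changes are placeholder arrays, and the (110)-type outliers are excluded by a hypothetical mask.

# Generic per-grain regression sketch (Python): reflectivity change vs. step density parameter.
# The arrays are placeholders; in practice they would come from the EBSD map (SDP) and the
# normalized 2D-SOR images (reflectivity change), one entry per grain.
import numpy as np

sdp = np.array([0.05, 0.10, 0.20, 0.30, 0.35, 0.40, 0.55, 0.70])    # placeholder SDP values
dR = np.array([-0.5, -1.0, -2.1, -3.2, -0.6, -4.0, -5.6, -7.1])     # placeholder reflectivity change, %
is_110_like = np.array([0, 0, 0, 0, 1, 0, 0, 0], dtype=bool)        # hypothetical (110)-type outlier mask

mask = ~is_110_like
slope, offset = np.polyfit(sdp[mask], dR[mask], 1)                  # least-squares line
r = np.corrcoef(sdp[mask], dR[mask])[0, 1]                          # Pearson correlation coefficient

print(f"dR ~ {slope:.1f} * SDP + {offset:.1f}  (Pearson r = {r:.2f})")
# A strongly negative slope with |r| close to 1 corresponds to the observation that more
# stepped surfaces darken (oxidize) more, whereas the same fit against the SEP shows no
# comparable correlation.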
Conclusions In this work, we have demonstrated that the simple 2D-SOR technique can be used together with EBSD to map the reflectivity change and thus the oxidation of Pd as a function of surface orientation. While the reflectance data give less insight than direct measurements of the surface composition as done by XPS or HESXRD, the method presented in this work enables us to quickly measure a very large number of surface orientations in a single experiment and at high pressures. This helps us to bridge both the materials and pressure gaps in catalysis and allows us to identify new regions of the surface orientation space for further exploration. For example, the abrupt change in reflectivity when moving along the edge between the (100) and (110) orientations is notable, as is the fact that the little-studied (310) orientation is the darkest and thus most likely the most oxidized surface orientation. The application of the presented technique is also not limited to Pd. Any polycrystalline metal with the right grain size can be used as a sample. This means that a wide variety of metals can be probed to identify new surface orientations to study in more detail for use as potential catalysts. We thus think that performing reflectivity studies on transition metal polycrystals can have a large impact in bridging the materials gap in catalysis.
Industrial catalysts are complex materials systems operating in harsh environments. The active parts of the catalysts are nanoparticles that expose different facets with different surface orientations at which the catalytic reactions occur. However, these facets are close to impossible to study in detail under industrially relevant operating conditions. Instead, simpler model systems, such as single crystals with a well-defined surface orientation, have been successfully used to study gas–surface interactions such as adsorption and desorption, surface oxidation, and oxidation/reduction reactions. To more closely mimic the many facets exhibited by nanoparticles and thereby close the so-called materials gap, there has also been a recent move toward using polycrystalline surfaces and curved crystals. However, these studies are limited either by the pressure or spatial resolution at realistic pressures or by the number of surfaces studied simultaneously. In this work, we demonstrate the use of reflectance microscopy to study a vast number of catalytically active surfaces simultaneously under realistic and identical reaction conditions. As a proof of concept, we have conducted an operando experiment to study CO oxidation over a Pd polycrystal, where the polycrystalline surface acts as a collection of many single-crystal surfaces. Finally, we visualized the resulting data by plotting the reflectivity as a function of surface orientation. We think the techniques and visualization methods introduced in this work will be key toward bridging the materials gap in catalysis.
Experiment In this experiment, we used a hat-shaped Pd polycrystal with a bottom diameter of 8 mm, a top diameter of 6 mm, and a height of 2 mm purchased from SPL in Zaandam. The sample was polished (ra <0.03 μm) and had a specified purity of 99.994%. Before the measurements, the surface was cleaned by three cycles of Ar + sputtering and annealing at 1000 K. The sample was transferred through air between the sputtering and measurements. When choosing what polycrystal to use, the size of the grains is important. The size has to be large enough to obtain good per-grain statistics while small enough to minimize gas gradient effects within the region of interest (ROI) as it is desirable that all grains experience the same gas conditions. For more discussion on this, see ref ( 11 ). The sample used has grains of the order of 10–100 μm in size, which is small enough to assume that all grains experience the same overall gas conditions. The crystallographic orientations of the grains were characterized by electron backscatter diffraction (EBSD) using a scanning electron microscope (FEI Quanta 200 MKII) with an integrated camera (Hikari XP) and a TSL-OIM system from EDAX. In this way, we surveyed a ROI of 1.43 × 1.26 mm 2 , which will be the area of the sample used in this work as shown in Figure 1 a. The ROI was chosen to be approximately in the center of the sample, but the exact choice of the area to use was arbitrary. Figure 1 b shows the orientation map of the grains in the ROI of the sample. Here it should be made clear that EBSD is a bulk technique—the determined orientations and Miller indices of the surfaces assume a perfect cut of the bulk grains with no refaceting or restructuring. The 2D-SOR microscope setup shown in Figure 2 consisted of off-the-shelf parts. The main part of the microscope is a preassembled lens system (Navitar 12X Zoom Series) with an optical illumination port. At this port, a high-intensity red LED at 660 nm (Thorlabs M660L4) is attached, which acts as a light source. A diffuser lens placed between the LED and the beam splitter removed any patterns from the LED itself. The light was reflected off the sample and imaged using a 16-bit Andor Zyla camera. This provides a flexible, portable, and inexpensive system that can easily be mounted on any reactor with optical access. The setup is described in more detail in ref ( 14 ). To quantify the reflectance data, we refer to the Fresnel equations and roughness calculations as discussed thoroughly in ref ( 22 ). In the calculations, we use the optical constants for bulk Pd metal and bulk PdO. 32 , 33 Further assuming negligible surface roughness, we can find the oxide thickness from the loss in reflectivity compared with the reduced surface without any oxide. This is illustrated in Figure 3 . The experiment was performed in a 23 mL high-pressure flow reactor. Optical access to the sample was provided by 18 mm diameter windows on all sides. Sample heating is done with a Boralectric resistive heater, onto which the sample is placed. The temperature of the sample was monitored with a type D thermocouple connected to the heater. Calibration measurements map the temperature reported by the thermocouple to the real sample temperature as discussed in ref ( 34 ). The gas supply into the reactor is regulated with a series of mass flow controllers (Bronkhorst EL-FLOW), and a pressure controller (Bronkhorst EL-PRESS) is used to keep a constant pressure in the reactor. 
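As a rough illustration of the reflectance quantification described above (Fresnel equations, negligible roughness), the sketch below computes the normal-incidence reflectance of a thin PdO film on bulk Pd at 660 nm and inverts the relative reflectivity loss to an oxide thickness. The complex refractive indices are placeholder values chosen only for illustration; a real analysis would use the tabulated bulk Pd and PdO optical constants cited above (refs 32 and 33).

```python
# Minimal sketch of the reflectance-to-thickness estimate: normal-incidence
# Fresnel reflectance of a thin PdO film on bulk Pd at 660 nm, assuming a
# smooth surface. The complex refractive indices are rough placeholder values.
import numpy as np

lam = 660e-9          # illumination wavelength (m)
n_air = 1.0 + 0.0j
n_pdo = 2.7 + 0.8j    # assumed PdO index at 660 nm (placeholder)
n_pd  = 1.8 + 4.2j    # assumed Pd index at 660 nm (placeholder)

def reflectance(d):
    """Normal-incidence reflectance of an oxide film of thickness d (m) on Pd."""
    r01 = (n_air - n_pdo) / (n_air + n_pdo)
    r12 = (n_pdo - n_pd) / (n_pdo + n_pd)
    phase = np.exp(2j * 2 * np.pi * n_pdo * d / lam)
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return np.abs(r) ** 2

R0 = reflectance(0.0)                     # reflectance of the oxide-free surface
d_grid = np.linspace(0, 100e-10, 2001)    # 0-100 Angstrom oxide thickness
rel = reflectance(d_grid) / R0            # relative reflectivity R(d)/R(0)

measured_drop = 0.95                      # e.g. a grain retaining 95% of its reflectivity
d_est = d_grid[np.argmin(np.abs(rel - measured_drop))]
print(f"Estimated oxide thickness: {d_est * 1e10:.1f} Angstrom")
```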
Using this system, we can reach flows between 10 and 500 mln/min at pressures between 10 mbar and 1 bar. Pressure gauges monitor the pressure before and after the reactor, which makes it possible to determine the reactor pressure through a calibration curve. A quadrupole mass spectrometer (Pfeiffer QMP 220), into which a small amount of the exhaust gas was leaked through a leak valve, was used to monitor the gas composition at the reactor outlet. More details on the reactor, the gas system, and its capabilities can be found in refs ( 24 , 34 , and 35 ).

Data Visualization

In addition to showing the reflectivity images themselves, we have chosen to present the results in two ways. First, we plot the value of the reflectivity of a grain in the IPF. Because we are working with Pd, which has a cubic lattice structure, the high rotational symmetry of the unit cell allows every surface orientation to be folded into the standard stereographic triangle. Thus, each surface orientation is assigned both a unique color and a unique position in the IPF, as shown in Figure 1 b–d. 36 Now we can use other data, in this case reflectivity data, and replace the color of the corresponding grain in the IPF with the reflectivity data while keeping the position. In this way, we can summarize the entire data set into an easy-to-understand form where we can plot a parameter, in this case the reflectivity, as a function of the grain orientation. The second method to present the data is to plot the reflectivity of each grain as a function of the SEP and SDP, which were devised in the work by Winkler et al. to condense the properties of a surface orientation into two parameters. 13
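A minimal sketch of this IPF-based visualization is given below: each grain's EBSD surface normal is folded into the cubic standard triangle by sorting the absolute components, stereographically projected, and colored by its reflectivity. The grain list is hypothetical and stands in for the EBSD output; the projection convention (triangle with corners (001), (101), and (111)) is one common choice.

```python
# Minimal sketch of the IPF visualization: fold each grain's surface normal
# (h, k, l) into the cubic standard triangle, project stereographically, and
# color by reflectivity. The grain list below is a hypothetical placeholder.
import numpy as np
import matplotlib.pyplot as plt

grains = [  # (h, k, l, relative reflectivity) -- placeholder values
    (0, 0, 1, 0.55), (1, 0, 1, 0.95), (1, 1, 1, 0.90),
    (2, 1, 0, 0.40), (3, 1, 1, 0.70), (5, 5, 3, 0.85),
]

def ipf_xy(hkl):
    """Project a cubic surface normal into the (001)-(101)-(111) standard triangle."""
    v = np.sort(np.abs(np.asarray(hkl, dtype=float)))  # 0 <= v[0] <= v[1] <= v[2]
    v /= np.linalg.norm(v)
    y, x, z = v[0], v[1], v[2]                         # convention: z >= x >= y >= 0
    return x / (1 + z), y / (1 + z)                    # stereographic projection

xy = np.array([ipf_xy(g[:3]) for g in grains])
refl = [g[3] for g in grains]

plt.scatter(xy[:, 0], xy[:, 1], c=refl, cmap="viridis", s=80)
plt.colorbar(label="relative reflectivity")
plt.xlabel("IPF x")
plt.ylabel("IPF y")
plt.title("Per-grain reflectivity in the inverse pole figure")
plt.show()
```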
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsami.3c11341 . Video showing the reflectance change of the Pd sample during the entire experiment ( Figure 5 ) ( MP4 ) Supplementary Material The authors declare no competing financial interest. Acknowledgments This project was financially supported by the Knut and Alice Wallenberg foundation (KAW) funded project “Atomistic design of new catalysts” (project no. KAW 2015.0058), the Swedish Research Council by the Röntgen-Ångström Cluster “In-situ High Energy X-ray Diffraction from Electrochemical Interfaces (HEXCHEM)” (project no. 2015-06092), the Swedish Research Council (project no. 2016-03501), the Swedish Foundation for Strategic Research (project no. ITM17-0045), the Åforsk Foundation, and the Crafoord Foundation. The authors express their gratitude for their financial support.
CC BY
no
2024-01-16 23:43:51
ACS Appl Mater Interfaces. 2023 Dec 18; 16(1):444-453
oa_package/55/df/PMC10788831.tar.gz
PMC10788832
38117238
Introduction Organic–inorganic metal halide perovskite solar cells (PSCs) have received significant interest from the photovoltaic community due to their skyrocketing power conversion efficiencies (PCEs) from 3.8 to 26.1%, 1 to compete with established solar cell technologies such as crystalline silicon (c-Si) and copper indium gallium selenide (CIGS). 2 , 3 Moreover, PSCs may be scaled up using a low-cost solution process from widely available abundant precursors showing promise as a future mainstream photovoltaic (PV) technology. 4 , 5 PSCs can also be integrated as top cells into tandem solar cells when combined with existing mature PV technologies to increase efficiency beyond the Shockley–Queisser limit of single-junction devices. 6 , 7 However, besides impressive efficiency, the long-term stability of PSC devices under practical working conditions still requires further improvement to satisfy stringent market demands. A typical PSC consists of a perovskite light absorber sandwiched between an n-type electron-transporting layer (ETL) and a p-type hole-transporting layer (HTL). 8 Hole-transporting materials (HTMs) play critical roles in efficiently extracting and transporting photogenerated holes from the perovskite layer to the electrode, as well as suppressing charge recombination in PSCs. 9 , 10 In general, HTMs should possess the following properties: (1) appropriate energy-level alignment with perovskite materials to guarantee effective hole extraction and electron blocking; (2) high hole mobility; (3) good solubility in common solvents; (4) excellent film-forming ability; (5) good thermal, photochemical, air, and moisture stability; and (6) low cost. 11 However, the requirements for HTMs vary depending on the device configurations. 12 For n–i–p PSCs, since the HTM layer is fabricated on top of the perovskite layer, a thick HTM film is required to ensure full coverage of the rough perovskite surface and suppress the diffusion of metal from the top electrode into the perovskite. Also, the HTM film should ideally be hydrophobic to protect the perovskite from moisture ingress. Although various kinds of HTMs have been developed, 2,2′,7,7′-tetrakis( N , N -di- p -methoxyphenylamino)-9,9′-spirobifluorene ( Spiro-OMeTAD ) has been proven to be the most reliable and effective HTM for use in n–i–p PSCs. 13 , 14 Spiro-OMeTAD has a large bandgap (about 3.0 eV) and a relatively shallow highest occupied molecular orbital (HOMO) energy level of around −5.1 eV, 12 which provides good electronic alignment with perovskite materials. In addition, the synthesis and solution-based film processing of Spiro-OMeTAD are well established and are well suited to the fabrication of large-area solar cells. On the other hand, chemical dopants or additives, such as lithium bis(trifluoromethanesulfonyl)imide (LiTFSI), cobalt(III) complexes, and 4- tert -butylpyridine ( t BP), are needed to improve the conductivity and hole mobility of the pristine Spiro-OMeTAD . These hygroscopic dopants have an impact on the device’s long-term stability due to moisture ingress and ion migration. Therefore, an interlayer that is hydrophobic 15 and/or able to block ion migration 16 between the perovskite layer and HTM layer would be helpful to improve the stability of PSCs. 
In the case of p–i–n devices, solution-processing of the perovskite absorber layer puts additional constraints on the choice of HTMs, as the materials must now be made resistant to the perovskite precursor solution, commonly a mixture of polar dimethylformamide (DMF) and dimethyl sulfoxide (DMSO) solutions. So far, polymeric HTMs, such as poly(3,4-ethylenedioxythiophene):polystyrenesulfonate ( PEDOT:PSS ), 17 poly[3-(4-carboxylatebutyl)thiophene] ( P3CT ) derivatives, 18 , 19 and poly[bis(4-phenyl)(2,4,6-trimethylphenyl)amine] ( PTAA ) 20 , 21 or combinations thereof, are widely used for this application. 22 , 23 Among them, PTAA , with its excellent electrical properties and chemical neutrality, has attracted particular interest. 5 , 24 − 27 However, the strongly hydrophobic PTAA film surface results in the dewetting of the perovskite precursor solution and low-quality perovskite films. 28 Despite several attempts to modify PTAA , such as chemical doping, 29 , 30 surface post-treatment, 31 and interfacial functionalization, 32 the tedious synthetic process and batch-to-batch variation of PTAA remain significant issues restricting its application to large-scale device fabrication. In this regard, small molecular organic molecules offer potential advantages, such as a well-defined molecular weight, ease of synthesis, and good reproducibility. To insolubilize small molecular molecules, the use of molecules with anchoring groups such as phosphonic acid (−PO(OH) 2 ) or carboxylic acid (−COOH) that can spontaneously bind to the transparent conducting oxide surface to form a conformal hole-collecting monolayer has been demonstrated as an effective way by our groups and others. 33 − 36 An alternative approach is to polymerize the small molecules in situ via cross-linking reactions. Soluble small molecules bearing cross-linkable units, such as vinyl, acrylate, azide, and oxetane groups, can form insoluble cross-linked three-dimensional (3D) networks under thermal or ultraviolet (UV) treatment. 37 − 39 Such cross-linked 3D networks could enable solvent-resistant hole-transporting layers 40 − 46 and protective interlayers. 47 , 48 However, the reported cross-linkable systems would not be suitable for flexible p–i–n PSCs with film substrates or for n–i–p PSCs due to their high cross-linking temperatures (usually >180 °C), which exceed the tolerance of the underlying layers. In this work, we report the development of a 9,9′-spirobifluorene-based molecule functionalized with four vinyl groups ( V1382 ) for the targeted cross-linkable HTL ( Scheme 1 ) and its application to PSCs. To lower the cross-linking temperature, the introduction of an aliphatic cross-linker containing four thiol groups, pentaerythritol tetrakis(3-mercaptopropionate) (PETMP), has been reported. 49 We chose a dithiol-terminated diphenylsulfide, 4,4′-thiobisbenzenethiol, as a cross-linker since it has a shorter insulating part than PETMP and may generate a stable radical form to facilitate the thiol–ene “click” reaction with V1382 . We found that the cross-linking between V1382 and 4,4′-thiobisbenzenethiol (dithiol) can occur at a low temperature of 103 °C to form an insoluble 3D polymer network. To the best of our knowledge, this is the lowest cross-linking temperature reported for HTLs for PSCs. Benefiting from the mild cross-linking conditions, this cross-linkable system is suitable for applications in both p–i–n and n–i–p PSC architectures. 
Devices employing the cross-linked V1382 /dithiol as the hole-transporting layer in p–i–n PSCs and as the interlayer between the perovskite layer and Spiro-OMeTAD in n–i–p PSCs have shown improved performance and long-term stability compared with devices using conventional HTMs. These results demonstrate cross-linking as an efficient strategy for low-cost and high-performance organic semiconducting materials, not only for photovoltaics but also for other optoelectronic devices such as light-emitting diodes, phototransistors, photocells, and so on.
Materials and Methods Fabrication of p–i–n PSCs Preparation of Transparent Conductive Oxide Substrates Glass/FTO substrates (10 sq –1 , AGC, Inc.) were etched with zinc powder and HCl (6 M in deionized water) and consecutively cleaned with 15 min ultrasonic bath in water, acetone, detergent solution (Semico Clean 56, Furuuchi chemical), water, and isopropanol, followed by drying with an air gun, and finally plasma treatment. The substrates were transferred to an inert gas-filled glovebox for further processing. Preparation of Hole-Transporting Layers V1382 was mixed with 4,4′-thiobisbenzenethiol (molar ratio = 1:2, concentration of V1382 = 0.125–4 mg mL –1 ) in chlorobenzene. The HTM solution (100 μL) was deposited on the FTO substrate using spin-coating (3000 rpm for 30 s, 5 s acceleration), followed by heating on a hot plate at 110 °C for 1 h. In the case of bare V1382 , 8 mg mL –1 V1382 was used. The hole-collecting material PTAA (2.0 mg mL –1 in anhydrous toluene) was deposited by using spin-coating (4000 rpm for 30 s, 5 s acceleration), followed by heating on a hot plate at 100 °C for 10 min. Preparation of Perovskite Layer The Cs 0.05 FA 0.80 MA 0.15 PbI 2.75 Br 0.25 precursor solution was prepared from CsI (69 mg, 0.27 mmol), MABr (85 mg, 0.76 mmol), PbI 2 (2.24 g, 4.85 mmol), PbBr 2 (96 mg, 0.26 mmol), and FAI (703 mg, 4.09 mmol) dissolved in a mixture of DMF (3.0 mL) and DMSO (0.90 mL). After stirring at 40 °C for 30 min, the solution was filtered with a 0.45 μm PTFE filter. 190 μL of the solution was placed on an FTO/HTM substrate and spread by spin-coating (slope 1 s, 1000 rpm 10 s, slope 5 s, 6000 rpm 20 s, slope 1 s) to make a thin film. 300 μL of chlorobenzene was dripped over the rotating substrate at 3 s before the end of the spinning at 6000 rpm. The films were then annealed on a hot plate at 150 °C for 10 min. These perovskite samples were moved under Ar to a vacuum deposition chamber, where 0.5 nm of ethylenediammonium diiodide (EDAI 2 ) (deposition rate 0.03 nm s –1 ) was deposited by thermal evaporation. Preparation of Electron-Transporting Layer and Metal Electrode The above samples were moved under Ar to a vacuum deposition chamber, where 20 nm of C 60 (deposition rate 0.05 nm s –1 ) and 8 nm of BCP (deposition rate 0.01 nm s –1 ) were deposited by thermal evaporation. The top electrode was prepared by depositing 100 nm of silver (deposition rate, 0.005 nm s –1 ) through a shadow mask. Fabrication of n–i–p PSCs Preparation of Transparent Conductive Oxide Substrates Glass/ITO substrates (10 sq –1 ) were etched with zinc powder and HCl (6 M in deionized water) and consecutively cleaned with a 15 min ultrasonic bath in water, acetone, detergent solution (Semico Clean 56, Furuuchi chemical), water, and isopropanol, followed by drying with an air gun, and finally plasma treatment. The substrates were transferred to an inert gas-filled glovebox for further processing. Preparation of the SnO 2 Layer The SnO 2 layer was prepared by spin-coating a colloidal dispersion (15% in H 2 O) diluted with deionized water (volume ratio = 1:1) on the ITO substrates (400 μL for each substrate, slope 2 s, 3000 rpm 20 s, slope 2 s) followed by annealing at 150 °C for 30 min. A plasma treatment was performed after cooling the substrate to room temperature, before transferring the samples to an inert gas-filled glovebox for further processing. 
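As a quick consistency check of the perovskite precursor recipe given above, the sketch below converts the weighed masses into millimoles and back into the nominal Cs 0.05 FA 0.80 MA 0.15 PbI 2.75 Br 0.25 stoichiometry. The molar masses are standard values; the script is only an illustration of how the composition follows from the weighed amounts.

```python
# Quick consistency check of the perovskite precursor recipe: convert weighed
# masses into millimoles and compare with the nominal
# Cs0.05FA0.80MA0.15PbI2.75Br0.25 stoichiometry. Molar masses are standard values.
masses_mg  = {"CsI": 69, "MABr": 85, "PbI2": 2240, "PbBr2": 96, "FAI": 703}
molar_mass = {"CsI": 259.81, "MABr": 111.97, "PbI2": 461.01,
              "PbBr2": 367.01, "FAI": 171.97}                      # g/mol

mmol = {k: m / molar_mass[k] for k, m in masses_mg.items()}        # mg / (g/mol) = mmol
pb = mmol["PbI2"] + mmol["PbBr2"]                                  # total Pb (mmol)

print("mmol:", {k: round(v, 2) for k, v in mmol.items()})
print("A-site per Pb: Cs %.2f, FA %.2f, MA %.2f"
      % (mmol["CsI"] / pb, mmol["FAI"] / pb, mmol["MABr"] / pb))
print("halide per Pb: I %.2f, Br %.2f"
      % ((2 * mmol["PbI2"] + mmol["FAI"] + mmol["CsI"]) / pb,
         (2 * mmol["PbBr2"] + mmol["MABr"]) / pb))
```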
Preparation of Perovskite Layer The Cs 0.05 FA 0.80 MA 0.15 PbI 2.75 Br 0.25 precursor solution was prepared from CsI (69 mg, 0.27 mmol), MABr (85 mg, 0.76 mmol), PbI 2 (2.24 g, 4.85 mmol), PbBr 2 (96 mg, 0.26 mmol), and FAI (703 mg, 4.09 mmol) dissolved in a mixture of DMF (3.0 mL) and DMSO (0.90 mL). After stirring at 40 °C for 30 min, the solution was filtered with a 0.45 μm PTFE filter. 190 μL of the solution was placed on a glass/ITO/SnO 2 substrate and spread by spin-coating (slope 1 s, 1000 rpm 10 s, slope 5 s, 6000 rpm 20 s, slope 1 s) to make a thin film. 300 μL of chlorobenzene was dripped over the rotating substrate at 3 s before the end of the spinning at 6000 rpm. The films were then annealed on a hot plate at 150 °C for 10 min. Preparation of Cross-Linked V1382 Interlayer V1382 was mixed with 4,4′-thiobisbenzenethiol (molar ratio = 1:2, concentration of V1382 = 1.0, 2.0 mg mL –1 ) in chlorobenzene. 100 μL of the solution was spin-coated on top of the perovskite layer (3000 rpm for 30 s, 5 s acceleration), followed by heating on a hot plate at 110 °C for 1 h. Preparation of Hole-Transporting Layer Spiro-OMeTAD (0.06 M) was mixed with an oxidizing agent [tris(2-(1 H -pyrazol-1-yl)-4- tert- butylpyridine)cobalt(III) tris(bis(trifluoromethylsulfonyl)imide)] (FK209, 0.15 equiv) into a solution of chlorobenzene, 4- tert -butylpyridine ( t BP, 3.3 equiv), and lithium bis(trifluoromethylsulfonyl)imide (LITFSI, 0.54 equiv). After being stirred at 70 °C for 30 min, the suspension was filtered with a 0.45 μm PTFE filter to remove insoluble Co(II) complexes. 90 μL of the solution was spin-coated on top of V1382 (slope 4 s, 4000 rpm, 30 s, slope 4 s), followed by annealing at 70 °C for 30 min. Preparation of Metal Electrode Gold electrodes (80 nm) were thermally deposited on the top face of the devices by using a shadow mask.
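For convenience, the doped Spiro-OMeTAD recipe above (0.06 M Spiro-OMeTAD with 0.15 equiv FK209, 0.54 equiv LiTFSI, and 3.3 equiv t BP) can be converted into amounts per milliliter of chlorobenzene, as sketched below. The molar masses of Spiro-OMeTAD, LiTFSI, and t BP are standard values, whereas the FK209 molar mass and the t BP density are assumed nominal values used only for illustration.

```python
# Minimal sketch converting the doped Spiro-OMeTAD recipe (0.06 M Spiro,
# 0.15 equiv FK209, 0.54 equiv LiTFSI, 3.3 equiv tBP) into amounts per mL.
spiro_M = 0.06                       # mol L-1
equiv = {"FK209": 0.15, "LiTFSI": 0.54, "tBP": 3.3}
molar_mass = {"Spiro-OMeTAD": 1225.4, "FK209": 1503.2,   # g/mol (FK209 value assumed)
              "LiTFSI": 287.1, "tBP": 135.2}
tbp_density = 0.923                  # g/mL (assumed nominal value)

amounts = {"Spiro-OMeTAD": spiro_M * molar_mass["Spiro-OMeTAD"]}   # mol/L * g/mol = mg/mL
for name, eq in equiv.items():
    amounts[name] = spiro_M * eq * molar_mass[name]                # mg per mL

for name, mg in amounts.items():
    if name == "tBP":
        print(f"{name}: {mg:.1f} mg/mL (~{mg / tbp_density:.0f} uL/mL)")
    else:
        print(f"{name}: {mg:.1f} mg/mL")
```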
Results and Discussion The polymer precursor V1382 , which possesses a 9,9′-spirobifluorene core and four vinyl cross-linkable groups, was synthesized in a facile 2-step synthetic procedure with commercially available starting materials as shown in Scheme 1 . During the first step, the palladium-catalyzed Buchwald–Hartwig amination reaction of 2,2′,7,7′-tetrabromo-9,9′-spirobifluorene and p -anisidine was carried out to give aminated precursor 1 in 70% yield. Compound 1 was then vinyl-functionalized by using 4-bromostyrene to generate the target product V1382 in 51% yield. Structures of the synthesized compounds were characterized by nuclear magnetic resonance (NMR), elemental analysis (EA), and mass spectrometry (MS). The total cost for V1382 is estimated to be 42 € g –1 , much cheaper than widely used HTMs, 50 indicating its strong potential for large-scale manufacturing processes ( Table S1 ). Detailed synthesis procedures and analysis data are given in the Supporting Information . The thermal properties of V1382 and its cross-linking reaction with 4,4′-thiobisbenzenethiol were investigated by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). The decomposition temperature corresponding to 5% weight loss ( T dec ) of V1382 was estimated from the TGA curve to be 460 °C ( Figure S1 ), confirming that V1382 has good thermal stability. As shown in Figure 1 a, an exothermic peak was detected at 253 °C during the first scan, while no distinct phase transition could be observed until 350 °C in the second heating scan, suggesting that thermal cross-linking of V1382 occurs at 253 °C. In contrast, after mixing V1382 with a dithiol cross-linker, 4,4′-thiobisbenzenethiol, in a molar ratio of 1:2, the exothermic peak shifted to the region of 103–120 °C, and the cross-linking temperature ( T poly ) was detected at 107 °C ( Figure 1 b). The results imply that the fast thermal cross-linking occurs due to the facile thiol–ene “click” reaction. It is worth noting that this is the lowest cross-linking temperature reported in the PSC field, 40 − 52 enabling the application in both p–i–n and n–i–p PSC architectures. To evaluate the optical properties of V1382 and formed polymers, ultraviolet–visible (UV–vis) absorption and photoluminescence (PL) spectra were measured from tetrahydrofuran (THF) solutions and thin films. The results are shown in Figure S2 and summarized in Table 1 . The absorption maxima (λ abs ) of V1382 were observed at 336 and 395 nm. The less intense absorption peak at 336 nm can be assigned to the π–π* transition, while the more intense absorption peak at 395 nm corresponds to the n –π* transition. After polymerization of V1382 at 255 °C, wide absorption band ranging from 275 to 450 nm was observed, while after thermal cross-linking using dithiol at 103 °C, the absorption spectra of the polymer had two main peaks at around 303 and 383 nm. In addition, the emission maxima (λ em ) of V1382 were observed at 419 and 441 nm with a Stokes shift value of 24 nm. V1382 films with and without 4,4′-thiobisbenzenethiol cross-linker (molar ratio = 1:2) were prepared by spin-coating the corresponding materials in THF solutions ( V1382 20 mg mL –1 ). The ability to form insoluble cross-linked networks was evaluated by measuring the UV–vis absorption of these spin-coated films. The results are shown in Figure 2 a,b. 
After annealing the films of V1382 without and with the dithiol cross-linker for only 15 min at 255 and 103 °C, respectively, and rinsing with THF several times to remove soluble parts, absorbance from the films was still detected. This indicates that the cross-linking of these films occurred under these conditions, resulting in good solvent-resistant films. We note that such rapid cross-linking is quite unusual for thiol–ene-type polymerization according to our previously reported works, 42 , 44 suggesting that the spiro configuration might be sterically or energetically favorable for this type of reaction. A longer time frame was used to quantitatively cross-link the films. The cross-linking process in both cases was completed after annealing for 60 min. Fourier transform infrared (FTIR) spectra ( Figure 2 c) were recorded to ascertain the occurrence of the cross-linking. After V1382 cross-linking with dithiol at 103 °C, the peak of S–H stretching vibration at 2520 cm –1 and the peak of C=C stretching vibration at 1625 cm –1 disappeared compared with the peaks before heating, confirming that fast thermal cross-linking occurs after heating V1382 with a dithiol cross-linker at 103 °C. The hole-transport properties of the HTMs were characterized with the aid of xerographic time-of-flight (XTOF) measurements ( Figure 3 a). At zero field strength, V1382 demonstrates a hole-drift mobility of 8.7 × 10 –5 cm 2 V –1 s –1 . After thermal annealing, regardless of whether the dithiol cross-linker was used, the hole mobilities of cross-linked films slightly reduce to 1.3 × 10 –5 cm 2 V –1 s –1 , yet are still comparable to those of popular HTMs for PSCs. 53 , 54 In addition, the solid-state ionization potentials ( I p ) of V1382 and the cross-linked films were measured by using photoelectron spectroscopy in air (PESA). As shown in Figure 3 b, the ionization potential of the V1382 film was measured to be 5.29 eV. The I p values slightly increase to 5.38 and 5.35 eV in the case of cross-linked V1382 without and with 4,4′-thiobisbenzenethiol, respectively. The I p values of the cross-linked V1382 films are smaller than the valence band (VB) energies of typical perovskite materials such as CH 3 NH 3 PbI 3 (MAPbI 3 , VB = 5.45 eV) or Cs 0.05 FA 0.80 MA 0.15 PbI 2.75 Br 0.25 (FA: formamidinium, VB = 5.56 eV) 34 and larger than those of conventional HTMs such as PTAA or Spiro-OMeTAD . As shown in the energy-level diagrams of both p–i–n and n–i–p PSC devices ( Figure S3 ), the smaller energy-level offset between the cross-linked V1382 and the perovskite, compared to conventional HTMs, suggests that more efficient hole transfer can be expected for the cross-linked V1382 . X-ray photoelectron spectroscopy (XPS) measurements were carried out to prove the interaction between the cross-linked V1382 /dithiol and the perovskite (Cs 0.05 FA 0.80 MA 0.15 PbI 2.75 Br 0.25 ). Figure S4 presents the XPS spectra of Pb 4 f peaks in the pristine perovskite film and the perovskite film with cross-linked polymer surface modification. Compared to the pristine film, the peaks of Pb 4f 7/2 and Pb 4f 5/2 in the modified perovskite film shifted 0.3 eV to a higher binding energy, implying an interaction between the cross-linked V1382 /dithiol and the perovskite surface. This could benefit solar cell operational stability.
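XTOF drift mobilities are commonly quoted as zero-field values obtained by extrapolating a Poole–Frenkel-type field dependence, μ(E) = μ0 exp(α√E), back to E = 0. The sketch below illustrates such an extrapolation on hypothetical data points; it is not the analysis pipeline used for the V1382 measurements, only a generic example of how a zero-field mobility of the kind quoted above can be extracted.

```python
# Hedged sketch: extrapolate a Poole-Frenkel-type field dependence,
# mu(E) = mu0 * exp(alpha * sqrt(E)), to zero field. The data points below are
# hypothetical placeholders, not the measured V1382 transients.
import numpy as np

E = np.array([2.5e5, 4.0e5, 6.4e5, 1.0e6])        # electric field (V cm-1)
mu = np.array([1.6e-4, 2.1e-4, 2.9e-4, 4.1e-4])   # drift mobility (cm2 V-1 s-1)

slope, intercept = np.polyfit(np.sqrt(E), np.log(mu), 1)
mu0 = np.exp(intercept)                           # zero-field mobility
print(f"alpha = {slope:.2e} (cm/V)^0.5, mu0 = {mu0:.2e} cm2 V-1 s-1")
```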
To evaluate the efficacy of the HTM formed by the cross-linking between V1382 and 4,4′-thiobisbenzenethiol (named cross-linked V1382 /dithiol) in PSCs, both p–i–n devices [fluorine-doped tin oxide (FTO)/HTM/perovskite/ethylenediammonium diiodide (EDAI 2 )/C 60 /bathocuproine (BCP)/Ag] ( Figure 4 a) and n–i–p devices [indium tin oxide (ITO)/SnO 2 /perovskite/(with or without HTM interlayer)/ Spiro-OMeTAD /Au] ( Figure 4 b) were fabricated. In the p–i–n PSCs, EDAI 2 was used as a post-treatment for the perovskite surface to improve the cell voltages. 55 A triple cation perovskite (Cs 0.05 FA 0.80 MA 0.15 PbI 2.75 Br 0.25 ) with a bandgap of 1.56 eV was selected as the light absorber material. 34 , 56 Cross-linked V1382 /dithiol was used as the HTM and the HTM interlayer in p–i–n and n–i–p PSCs, respectively. The details for the device fabrication are provided in the Supporting Information . The current–voltage ( J–V ) curves of devices were measured under AM 1.5G illumination at 100 mW cm –2 , and detailed device parameters are listed in Table 2 . In the p–i–n PSCs, all HTMs are used without any dopants or additives. Devices with PTAA as the HTM were also fabricated as references. The performance of the cross-linked V1382 /dithiol-based devices with different concentrations of V1382 (0.125–4.0 mg mL –1 ) is presented in Table S2 and Figures S5–S9 in the Supporting Information. The morphology of the perovskite films was characterized with the help of scanning electron microscopy (SEM) ( Figures S10 and S11 ). All of the perovskite layers are smooth and pinhole-free, indicating that the perovskite films are not significantly affected by the concentration of V1382 used for cross-linking. The concentration of V1382 used for cross-linking with dithiol was optimized to be 2.0 mg mL –1 . Devices with cross-linked V1382 /dithiol fabricated by using <2.0 mg mL –1 of V1382 exhibited a lower open-circuit voltage and a larger hysteresis, while those using >2.0 mg mL –1 of V1382 showed a lower fill factor. Under the optimized conditions, in the forward scan, the cross-linked V1382 /dithiol-based p–i–n devices exhibited a short-circuit current density ( J SC ) of 23.0 mA cm –2 , an open-circuit voltage ( V OC ) of 1.09 V, and a fill factor (FF) of 0.77, resulting in a PCE of 19.3%. The J SC values derived from the J–V measurements were consistent with the values integrated from the incident photon-to-current efficiency (IPCE) spectra ( Figures S6–S8 ). Compared to the reference devices based on PTAA ( Figures 4 c, S12, and S13 ), the cross-linked V1382 /dithiol-based devices showed comparable PCE (19.3 vs 19.3%), higher V OC (1.09 vs 1.05 V), and smaller hysteresis (−0.027 vs −0.090). The higher V OC of the cross-linked V1382 /dithiol-based devices could be attributed to the larger ionization potential (or deeper HOMO energy level), resulting in a better energy alignment with the VB of the perovskite material. To compare the operational stability of p–i–n devices using cross-linked V1382 /dithiol and reference PTAA HTMs, maximum power point tracking (MPPT) was carried out under AM 1.5G in an inert atmosphere ( Figure 4 e). The PCE of the PTAA -based reference device degraded to 80% of its initial value after 30 h. In contrast, the cross-linked V1382 /dithiol-based device still retained 84% of the initial output after 200 h, indicating the superior long-term stability of the cross-linked HTMs. 
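As a quick arithmetic check, the quoted efficiency follows directly from the J–V parameters via PCE = J SC × V OC × FF / P in with P in = 100 mW cm –2 (AM 1.5G), as sketched below.

```python
# Quick check of the quoted J-V parameters: PCE = Jsc * Voc * FF / Pin,
# with Pin = 100 mW cm-2 (AM 1.5G). Values are taken from the text above.
def pce(jsc_mA_cm2, voc_V, ff, pin_mW_cm2=100.0):
    return 100.0 * jsc_mA_cm2 * voc_V * ff / pin_mW_cm2   # in percent

print(f"cross-linked V1382/dithiol (p-i-n): {pce(23.0, 1.09, 0.77):.1f}%")  # ~19.3%
```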
In addition, Figure S14 shows the much improved thermal stability of the unencapsulated device using the cross-linked V1382 /dithiol, which retained 91% of its initial PCE after being heated at 85 °C in air for 50 h under a relative humidity of 40%, while the PCE of the PTAA -based device dropped to 76% of its initial value. The electrical properties of the devices were investigated with the aid of impedance spectroscopy (AM 1.5G, zero applied bias; Figure S15 ). The data are analyzed with a simple equivalent circuit comprising series and parallel resistances together with a parallel capacitance element. At low bias voltages, the parallel resistance is determined by recombination and/or leakage currents, with larger values indicating either better quality of the perovskite layer or more efficient charge extraction from the perovskite absorber. The parallel resistance of the cross-linked V1382 /dithiol-based device was estimated to be 228 Ω cm 2 , higher than that of the PTAA -based device (152 Ω cm 2 ). This indicates that interfacial recombination could be suppressed in the case of the device with cross-linked V1382 /dithiol. The results are in good agreement with the trend in V OC . The effect of the cross-linked V1382 /dithiol as the interlayer between the perovskite layer and Spiro-OMeTAD on the performance of the n–i–p PSCs was investigated. In this case, Spiro-OMeTAD was doped with LiTFSI, the Co(III) complex, and t BP. Devices using the doped Spiro-OMeTAD without the interlayer were also fabricated as reference n–i–p devices ( Figure S16 ). The concentration of V1382 in the cross-linking precursor solution used for the interlayer was optimized and determined to be 1.0 mg mL –1 ( Table S3 and Figures S17–S20 ). As shown in Figure 4 d, after optimization, the device with the cross-linked interlayer exhibited a PCE of 19.1% with a J SC of 22.4 mA cm –2 , a V OC of 1.10 V, and an FF of 0.77 in the forward scan. For the reference device without the interlayer, slight drops in V OC and PCE were observed ( V OC = 1.08 V and PCE = 18.9%). This implies that, by inserting the cross-linked V1382 /dithiol interlayer, interfacial recombination could be suppressed. As confirmed by impedance spectroscopy ( Figure S21 ), the parallel resistance of the device increased from 71 to 252 Ω cm 2 after inserting the cross-linked V1382 /dithiol interlayer, supporting the above statement. The operational stability of the devices was assessed by running them at the maximum power point under AM 1.5G for 24 h. As shown in Figure 4 f, the PCE of the reference device degraded to 60% of its initial value after 16 h, while the device with the cross-linked interlayer still maintained 84% of its initial output after 24 h. In addition, a thermal durability test on the unencapsulated devices was also carried out under an ambient atmosphere. The results are shown in Figure S22 . After heating the devices at 100 °C for 1 h, the efficiency of the reference device without the interlayer dropped to 58% of the initial efficiency. In contrast, under the same conditions, the efficiency of the device using the cross-linked V1382 /dithiol interlayer still retained 71% of its initial value.
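The equivalent circuit used in this impedance analysis (a series resistance Rs feeding a parallel Rp–C element) has the closed form Z(ω) = Rs + Rp/(1 + iωRpC), so the low-frequency limit of the Nyquist plot approaches Rs + Rp and the semicircle diameter gives Rp. The sketch below evaluates this expression for the two quoted Rp values; Rs and C are assumed placeholder values used only for illustration.

```python
# Minimal sketch of the equivalent circuit used in the impedance analysis:
# Z(w) = Rs + Rp / (1 + j*w*Rp*C). Rp values are taken from the text;
# Rs and C are assumed placeholder values.
import numpy as np

def impedance(freq_hz, Rs, Rp, C):
    w = 2 * np.pi * freq_hz
    return Rs + Rp / (1 + 1j * w * Rp * C)

freqs = np.logspace(0, 6, 200)                      # 1 Hz - 1 MHz
for label, Rp in [("PTAA reference", 152.0), ("cross-linked V1382/dithiol", 228.0)]:
    Z = impedance(freqs, Rs=5.0, Rp=Rp, C=1e-7)     # Rs, C: assumed values
    # A Nyquist plot of -Im(Z) vs Re(Z) traces a semicircle of diameter Rp.
    print(f"{label}: low-frequency |Z| ~ {abs(Z[0]):.0f} ohm cm2 "
          f"(Rs + Rp = {5.0 + Rp:.0f})")
```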
Since the cross-linked V1382 /dithiol with a water contact angle of 69° ( Figure S23 ) shows similar hydrophobicity to doped Spiro-OMeTAD , 57 , 58 the better stability of the cross-linked V1382 /dithiol-based PSCs could be attributed to the suppression of the metal diffusion 59 and the interfacial defect passivation, 60 , 61 caused by the insertion of the sulfur-rich interlayer. In order to investigate the interfacial charge transfer kinetics, steady-state photoluminescence (PL) quenching and time-resolved PL (TRPL) decay on the perovskite films deposited on quartz, PTAA , and cross-linked V1382 /dithiol were conducted. 62 As shown in Figure 5 a, after fabricating perovskite on HTM layers, the PL peak intensity was reduced, falling to 56 and 35% for PTAA and cross-linked V1382 /dithiol, respectively. The TRPL lifetime for the pristine perovskite film was found to be 196 ns, and the TRPL lifetime for HTM/perovskite films decreased to 120 and 75 ns for PTAA and cross-linked V1382 /dithiol, respectively ( Figure 5 b). The stronger PL quenching together with the shorter PL lifetime indicates that cross-linked V1382 /dithiol has a better hole extraction ability than PTAA . There is no significant difference between the PL properties of perovskite/ Spiro-OMeTAD and perovskite/cross-linked V1382 interlayer/ Spiro-OMeTAD ( Figure S24 and Table S4 ).
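The TRPL lifetimes quoted above are typically obtained by fitting the decay traces with an exponential model. The sketch below fits a single-exponential decay with a constant background to a synthetic trace; it is a generic illustration of the fitting step, not the exact model (which may be multi-exponential) used for Figure 5 b.

```python
# Hedged sketch: extract a TRPL lifetime by fitting I(t) = A*exp(-t/tau) + bg.
# The synthetic trace below is a stand-in for a measured decay curve.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0, 1000, 500)                      # time (ns)
true_tau = 120.0
counts = 1000 * np.exp(-t / true_tau) + 5 + rng.normal(0, 5, t.size)

def model(t, A, tau, bg):
    return A * np.exp(-t / tau) + bg

popt, _ = curve_fit(model, t, counts, p0=(800, 100, 0))
print(f"fitted lifetime: {popt[1]:.0f} ns")
```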
Conclusions

In summary, a low-cost 9,9′-spirobifluorene derivative bearing four vinyl groups ( V1382 ) was designed and synthesized. Due to the presence of vinyl groups, V1382 can undergo thermal cross-linking at 255 °C to form a solvent-resistant polymeric network. Importantly, by mixing V1382 with 4,4′-thiobisbenzenethiol in a molar ratio of 1:2, the cross-linking temperature can be lowered to 103 °C via a facile thiol–ene reaction. The cross-linked V1382 /dithiol film exhibits appropriate hole mobility and ionization potential, implying its potential as an HTM in PSCs. Taking advantage of the low cross-linking temperature, the cross-linked V1382 /dithiol can be used as the HTM and HTM interlayer in p–i–n and n–i–p PSC devices, respectively. Devices with the cross-linked V1382 /dithiol were found to show suppressed interfacial recombination, resulting in better power conversion efficiencies and operational stability than devices using conventional hole-transporting materials such as PTAA and Spiro-OMeTAD .
A novel 9,9′-spirobifluorene derivative bearing thermally cross-linkable vinyl groups ( V1382 ) was developed as a hole-transporting material for perovskite solar cells (PSCs). After thermal cross-linking, a smooth and solvent-resistant three-dimensional (3D) polymeric network is formed such that orthogonal solvents are no longer needed to process subsequent layers. Copolymerizing V1382 with 4,4′-thiobisbenzenethiol (dithiol) lowers the cross-linking temperature to 103 °C via the facile thiol–ene “click” reaction. The effectiveness of the cross-linked V1382 /dithiol was demonstrated both as a hole-transporting material in p–i–n and as an interlayer between the perovskite and the hole-transporting layer in n–i–p PSC devices. Both devices exhibit better power conversion efficiencies and operational stability than devices using conventional PTAA or Spiro-OMeTAD hole-transporting materials.
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsami.3c13950 . Equipment and characterization; detailed synthetic procedures, DSC, UV–vis, PESA, XPS, and SEM data; detailed photovoltaic parameters along with J–V curves; and PL data ( PDF ) Supplementary Material Author Contributions ∥ S.D.-G. and M.A.T contributed equally to this work. The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript. The authors declare no competing financial interest. Acknowledgments This work was funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or CINEA. Neither the European Union nor the granting authority can be held responsible for them. VALHALLA project has received funding from Horizon Europe Research and Innovation Action program under Grant Agreement no. 101082176. This work was also supported by the Japan Society for the Promotion of Science (JP20K22531, J22K14744, and JP21H04699), a research grant from the Iwatani Naoji Foundation, the Mazda Foundation, and JSPS Fellows (21J23253). The authors thank Yasuko Iwasaki (ICR, Kyoto University) for the SEM measurements. They also thank Prof. Toshiyuki Nohira and Dr. Takayuki Yamamoto (ICR, Kyoto University) for XPS measurements.
CC BY
no
2024-01-16 23:43:51
ACS Appl Mater Interfaces. 2023 Dec 20; 16(1):1206-1216
oa_package/7a/e2/PMC10788832.tar.gz
PMC10788833
38109313
Introduction In the field of lighting and displays, the discovery of the blue-light-emitting diode (LED) marks the beginning of a revolution. Converting a part of the blue LED output to longer wavelength light (green, yellow, orange, or red) by a luminescent material (a phosphor) makes it possible to realize compact and efficient white light sources with a great flexibility in spectral distribution. The desired characteristics of a phosphor depend on the application. Lighting requires white LEDs (wLEDs) with a high efficacy (lumen/W output) and a high color rendering index (CRI). The high brightness of wLEDs can be realized only using phosphors with emitters that have a high turnover rate (photons/s). In addition, the luminescence quenching temperature has to be high as the device locally heats up to 150 ° C. In displays, brightness is lower, and this sets less stringent requirements on turnover rate and stability. However, phosphors with saturated colors with emission at specific wavelengths are preferred to extend the color gamut while remaining efficient. A successful red-emitting phosphor, especially for displays, is K 2 SiF 6 :Mn 4+ (KSF). The Mn 4+ ion has the 3d 3 configuration, and in fluorides, it shows a sharp line emission around 620 nm due to vibronic 2 E → 4 A 2 transitions. The narrow spectral distribution is ideal for display applications. The luminescence of KSF was reported already back in 1973, but it was not until 2006 that its potential in LED lighting and displays was realized. 1 , 2 The spectral properties of KSF are superior to those of other red LED phosphors such as CaAlSiN 3 :Eu 2+ (CASN). The broad band red Eu 2+ emission extends toward the NIR, where the eye sensitivity is low. This reduces the lumen/W output. The narrow line emission around 620 nm of Mn 4+ helps to extend the color gamut in displays. Unfortunately, for higher power applications, KSF is less suitable, because the long emission lifetime (∼10 ms) for the parity- and spin-forbidden 2 E → 4 A 2 transition limits the turnover rate to ∼100 photons/s per Mn 4+ ion and, thus, lowers the external quantum yield (EQY) at higher photon fluxes. The popular KSF phosphor has a cubic crystal structure, and the Mn 4+ ion is in a centrosymmetric octahedral coordination of fluoride ions. The inversion symmetry makes the parity selection rule strict, and it can only be lifted by coupling with odd-parity vibrations. As a result, the sharp emission lines observed are vibronic lines corresponding to ungerade vibrations that induce odd-parity crystal field components. The strictly forbidden zero-phonon line (ZPL) is not observed, and the luminescence lifetime of the 2 E state is long. Later, other Mn 4+ -doped fluoride hosts were found with hexagonal or trigonal crystal structures. 3 − 5 The lower local symmetry for the Mn 4+ ion results in the appearance of a ZPL (induced by static odd-parity crystal field components) in addition to vibronic lines. 6 The emission lifetime for Mn 4+ in these hosts is shorter, and the higher eye sensitivity at the ZPL wavelength is also beneficial for the efficacy. Unfortunately, for all of these hosts, the luminescence quenching temperature and/or stability were low, and they could not replace KSF, in spite of the superior spectral properties. There has been a search for Mn 4+ phosphors similar to KSF but with a lower symmetry crystal structure to decrease the lifetime and induce a ZPL. Such a phosphor is reported here: hexagonal (K,Rb)SiF 6 :Mn 4+ (h-KRSF:Mn 4+ ). 
Normally, K 2 SiF 6 and Rb 2 SiF 6 , as well as their solid solutions, form a cubic phase. Yet, here we show that under specific reaction conditions, the mixed solid solutions can form in a stable hexagonal phase. Interestingly, the existence of a hexagonal crystal structure for KSF or KRSF has sporadically been reported. The crystal structure was described by Kolditz and Preiss in 1963, 7 referring back to earlier reports from 1904. 8 In the mineralogy of fumaroles, grains of 0.3 mm have been reported with the chemical formula K 2 SiF 6 and hexagonal symmetry, 9 while in 1952, hexagonal KSF was found when analyzing deposits from a decommissioned chimney that was used to drain sulfuric acid and hydrogen fluoride gases. 10 There are also a few recent examples of the hexagonal form of KSF. In 2015, the luminescence of KSF:Mn 4+ was measured at increasing pressure. Between 9 and 13 kbar, a strong ZPL arises that does not disappear after decompression. This could be due to the formation of nanocrystalline hexagonal KSF, but XRD measurements after decompression did not indicate a cubic-to-hexagonal phase transformation. 11 In 2014, hexagonal KSF:Mn 4+ was synthesized, but no luminescence was observed. 12 So far, there have been no reports in which Mn 4+ luminescence in hexagonal KSF was measured and verified. For h-KRSF, there is one patent that reports the existence and luminescence of this phase and describes the synthesis of KRSF:Mn 4+ as a phosphor. 13 In this paper, we describe the reproducible synthesis of hexagonal KRSF:Mn 4+ . We report the improved luminescence properties induced by the lower site symmetry for Mn 4+ in the hexagonal phase and evaluate the advantageous properties, such as a shorter luminescence decay time and a strong ZPL that increases the efficacy of the phosphor. We follow the formation of h-KRSF by measuring the Mn 4+ emission to probe the phase transition from cubic to hexagonal and show how, after a long induction period, h-KRSF starts to form and the transformation rate increases exponentially with time. Finally, we determine the temperature stability of h-KRSF by measuring the back transformation to the cubic KRSF via temperature-dependent XRD and luminescence measurements.
Methods Synthesis The synthesis procedure for KRSF is inspired by previously reported methods for KSF. As a Mn precursor, K 2 MnF 6 was used. As this is not commercially available because of its low stability, it was synthesized as described by Roesky. 14 Other chemicals used were 48% HF and 30% H 2 SiF 6 solutions from Sigma-Aldrich, KHF 2 from Strem Chemicals, and RbF from Chempur. For the typical synthesis of KRbSiF 6 :0.5% Mn 4+ , 12 mg K 2 MnF 6 , 0.391 g KHF 2 , and 0.523 g RbF were dissolved in 1.5 mL aqueous HF (48 vol %). In a second beaker, 1.5 mL of aqueous 30 wt % H 2 SiF 6 was combined with 5 mL of 48% HF. Upon combining the two solutions, some turbidity was observed. To regain full dissolution of all precursors, ∼20 mL aqueous HF was added until a clear solution was obtained. This solution was added to four times the volume of ethanol (EtOH) (∼100 mL). No precipitate was visible to the naked eye, but under illumination with a hand-held violet laser (405 nm), the solution showed red luminescence. This indicates the formation of nanosized KRSF particles. The aqueous EtOH solution was left to evaporate for 2 days to a week in the fume hood. The amount of precipitate gradually increases during evaporation. After all the liquid evaporated, the solid material was washed with 3% H 2 O 2 aqueous solution and subsequently with EtOH, after which it was dried at 100 °C for 1–2 h. The hexagonal KRSF (h-KRSF) synthesized through this procedure contained 20–50 mol % of Rb. The K, Rb, and Mn concentrations in the samples discussed below were measured with ICP-OES, and the values can be found in Section S4 . For comparison, cubic KRSF (c-KRSF) was synthesized. Two different methods were employed. One involved immediate separation by decantation of the precipitate formed directly after the addition of H 2 SiF 6 in the synthesis method described above. The second method was heating the hexagonal KRSF to 400 °C for 30 min. Characterization The powders were examined using powder X-ray diffraction to determine the phase purity. A Bruker D2 PHASER X-ray diffractometer with a Co source (λ Kα = 1.7902 Å) was used at 30 kV operating voltage and 10 mA current. The temperature-dependent X-ray diffraction measurements were performed with a Malvern Panalytical Aeris Research diffractometer equipped with an Anton Paar BTS 500 heating stage and a Cu K α (λ Kα = 1.5418 Å) radiation source. The K, Rb, and Mn concentrations in the phosphors were examined with inductively coupled plasma optical emission spectroscopy (ICP-OES). All measurements were performed on a PerkinElmer Optima 8300DV spectrometer (Mn λ em = 257.610 nm, Rb λ em = 780.023 nm, and K λ em = 766.490 nm). Aqua regia was used to dissolve the phosphors. Optical Spectroscopy Photoluminescence (PL) and PL excitation (PLE) spectra were recorded using an Edinburgh Instruments FLS 920 spectrofluorometer equipped with a 450 W Xe lamp as the excitation source and a Hamamatsu R928 photomultiplier tube (PMT) detector. PL decay curves were recorded using a tunable optical parametric oscillator (OPO) Opotek Opolette HE 355II giving ∼1–5 mJ pulses in the visible or near-infrared (pulse width: 10 ns; repetition rate: 20 Hz) as excitation source and the multichannel scaling (MCS) capabilities included in the Edinburgh spectrofluorometer. For temperature-dependent studies, a temperature-controlled stage from Linkam Scientific (THMS600) was built in the spectrofluorometer for measurements in a −190 to 450 °C temperature range.
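As a quick consistency check on the quoted precursor amounts (my own back-of-the-envelope arithmetic, not part of the reported procedure), the sketch below converts the masses into moles and shows that KHF 2 and RbF are weighed in at an essentially 1:1 K:Rb molar ratio.

```python
# Back-of-the-envelope check (not from the paper): moles of K, Rb, and Mn delivered
# by the quoted precursor masses for the typical KRbSiF6:0.5% Mn4+ synthesis.

M_KHF2 = 39.10 + 1.01 + 2 * 19.00        # g/mol
M_RbF = 85.47 + 19.00                    # g/mol
M_K2MnF6 = 2 * 39.10 + 54.94 + 6 * 19.00 # g/mol

n_KHF2 = 0.391 / M_KHF2                  # mol K from KHF2
n_RbF = 0.523 / M_RbF                    # mol Rb from RbF
n_K2MnF6 = 0.012 / M_K2MnF6              # mol Mn (each formula unit also brings 2 K)

n_K = n_KHF2 + 2 * n_K2MnF6
n_Rb = n_RbF

print(f"K  : {1e3 * n_K:.2f} mmol")
print(f"Rb : {1e3 * n_Rb:.2f} mmol")
print(f"Mn : {1e3 * n_K2MnF6:.3f} mmol")
print(f"K:Rb molar ratio ~ {n_K / n_Rb:.2f}")
```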
Measurements down to 4 K were performed with an Oxford Instruments liquid-He cold-finger cryostat. The in situ monitoring of the cubic-to-hexagonal phase transformation was performed with a custom-built optical setup. In short, the beaker containing the reaction mixture was illuminated from above with an OBIS LX 445 nm, 45 mW laser with a fiber pigtail output. An AvaSpec-HSC 1024 × 58 TEC-EVO CCD spectrometer equipped with an optical fiber and a 472 nm long-pass filter was used to collect the red emission from the side of the beaker and measure emission spectra at regular time intervals during the formation (for up to several days). DFT Calculations To assess the stability of the cubic vs. hexagonal phase for KSF, RSF, and KRSF, first-principles total-energy calculations 15 were performed based on density functional theory (DFT) 16 , 17 using the projector augmented wave (PAW) method as implemented in the Vienna ab initio simulation package. 18 , 19 The frozen-core approximation was used within PAW, and the valence electron configurations are 3s 2 3p 6 4s 1 for K, 4s 2 4p 6 5s 1 for Rb, 3s 2 3p 2 for Si, and 2s 2 2p 5 for F. Exchange and correlation were treated with the generalized gradient approximation. 20 The wave functions were expanded in a plane-wave basis set with a kinetic energy cut-off of 600 eV. 8 × 8 × 8 and 6 × 6 × 4 Monkhorst-Pack k -point meshes were used for the integration in k space in the Brillouin zone for the cubic and hexagonal unit cells, respectively. The structural optimizations were performed until each component of the interatomic force became less than 1.0 × 10 –3 eV/Å.
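For readers who want to set up a comparable calculation, a minimal sketch of the reported settings is given below using the ASE interface to VASP. The original calculations were not necessarily prepared this way; the choices of IBRION and ISIF (conjugate-gradient relaxation of ions and cell) are my assumptions, and the pseudopotential paths and functional details must be configured separately.

```python
# Minimal sketch (assumptions noted above): encode the reported DFT settings
# (GGA, 600 eV plane-wave cutoff, Monkhorst-Pack meshes, forces < 1e-3 eV/A)
# with ASE's VASP calculator. Requires a working VASP installation and
# VASP_PP_PATH pointing to the PAW potentials.
from ase.calculators.vasp import Vasp

def make_relaxation_calc(kpts):
    return Vasp(
        xc="PBE",        # GGA exchange-correlation
        encut=600,       # plane-wave kinetic-energy cutoff (eV)
        kpts=kpts,       # Monkhorst-Pack k-point mesh
        ibrion=2,        # conjugate-gradient relaxation (assumed)
        isif=3,          # relax ions and cell (assumed)
        ediffg=-1e-3,    # converge until all forces are below 1e-3 eV/A
    )

calc_cubic = make_relaxation_calc((8, 8, 8))       # cubic unit cell
calc_hexagonal = make_relaxation_calc((6, 6, 4))   # hexagonal unit cell
```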
Results and Discussion Phase Identification To investigate the crystal structure and phase purity of the different materials, X-ray diffractograms of the dry powders were measured after synthesis. In Figure 1 , the diffractograms of the different microcrystalline powders are shown with their respective references underneath. For all samples, there is good agreement with the reference diffraction patterns. This shows that the different synthesis methods result in phase-pure crystalline materials. For cubic KSF and RSF, the crystal structure is well established, and the reference diffractograms are well known. For c-KRSF, the diffraction lines are at angles in between KSF and RSF, as expected for a solid solution. A good agreement with the experimentally observed positions of diffraction lines was obtained by assuming an increase of 2% in lattice distances compared with the KSF reference. A slight increase is expected by the replacement of K by Rb as the ionic radius of Rb + (1.72 Å) is larger than that of K + (1.64 Å), causing a small expansion of the unit cell. 23 The reference pattern of h-KRSF is based on an earlier work on hexagonal KSF. In ref. ( 9 ), the XRD pattern for h-KSF is reported and used to derive lattice parameters a = 5.67 Å and c = 9.24 Å and identify two different sites for the K + ion, a smaller M1 and a larger M2 site. The diffraction pattern obtained here for KRSF is very similar. A good match is obtained for slightly larger lattice parameters a = 5.78 Å and c = 9.42 Å, providing convincing evidence for the formation of hexagonal KRbSiF 6 :Mn 4+ . The powder XRD data do not allow us to distinguish between the ordering of Rb + and K + on the M1 and M2 sites. It will be interesting to obtain high-quality single crystal data to obtain information on site occupation in the mixed crystal. To evaluate the particle size and particle size distribution, we recorded SEM images of the final product. The SEM image in Figure 2 shows that the synthesis procedure used results in a homogeneous particle size distribution with an average particle size of ∼30 μm. Optical Properties To study the optical properties of Mn 4+ in the new h-KRSF, both PL and PLE spectra were measured for low-doped samples (0.1–0.5% Mn 4+ ). For comparison, the spectra of Mn 4+ in cubic KRSF, KSF, and RSF were measured as well. In Figure 3 , it is observed that all the PLE spectra have two relatively strong and broad excitation bands around 360 and 460 nm. The 460 nm band shows some sharp lines around 470 nm. These can be ascribed to Xe-lamp lines that are visible in spite of correcting the spectra for variations in the Xe-lamp intensity. A zoom-in for the area between 560 and 625 nm shows a multitude of weak and narrow excitation lines. The PLE spectra of the four samples are very similar, with one exception: there is a sharp extra peak at 621.5 nm for Mn 4+ in h-KRSF. In the PL spectra ( Figure 3 c), again, all spectra are very similar, showing sharp emission lines at the same positions, with small shifts of ∼0.5 nm to longer wavelengths from KSF to RSF. Again, there is one exception: an extra peak at 621.5 nm for Mn 4+ in the hexagonal form of KRSF. Based on the Tanabe–Sugano diagram for 3d 3 ions in octahedral symmetry, the excitation bands at 360 and 460 nm in the PLE spectra are assigned to the 4 A 2 → 4 T 1 and 4 A 2 → 4 T 2 transitions, respectively.
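To make the ∼2% lattice expansion more tangible, the sketch below uses Bragg's law with the Co Kα wavelength from the Methods to estimate how far a reflection shifts to lower angle; the d-spacing used is a hypothetical value of my choosing, not a refined one.

```python
# Illustration only: how a ~2% larger lattice spacing shifts a powder reflection
# to lower angle (Bragg's law), using the Co K-alpha wavelength from the Methods.
import math

WAVELENGTH = 1.7902  # Angstrom, Co K-alpha

def two_theta_deg(d_spacing_angstrom: float) -> float:
    """First-order diffraction angle 2-theta in degrees for a given d-spacing."""
    return 2.0 * math.degrees(math.asin(WAVELENGTH / (2.0 * d_spacing_angstrom)))

d_ksf = 4.70               # hypothetical d-spacing of a KSF reflection (A)
d_krsf = 1.02 * d_ksf      # ~2% larger spacing for the K/Rb solid solution

print(f"KSF reference : 2-theta ~ {two_theta_deg(d_ksf):.2f} deg")
print(f"c-KRSF (+2 %) : 2-theta ~ {two_theta_deg(d_krsf):.2f} deg")
```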
In the zoom-in spectra, Figure 3 b, the peaks observed from 560 to 595 nm are assigned to the vibronic lines of the 4 A 2 → 2 T 1 transition and from 600 to 625 nm to vibronic excitation lines of the 4 A 2 → 2 E transition in the cubic modifications. For Mn 4+ in inversion symmetry, all 3d 3 → 3d 3 transitions are parity forbidden, and coupling with odd-parity vibrations is required to partly lift the parity selection rule, resulting in the observation of vibronic excitation and emission lines. In h-KRSF, the Mn 4+ ion is in a site with lower symmetry, and static odd-parity crystal field components allow for breaking the parity selection rule. As a result, also the purely electronic zero-phonon transition can be observed. For the 4 A 2 → 2 E transition in h-KRSF, this zero-phonon line (ZPL) is at 621.5 nm and is identical in the excitation and emission spectra. The positions of the vibronic emission lines in KSF ( Figure 3 c) are 597, 608, 613, 630, 635, and 648 nm, in agreement with earlier reports. The lines at 630, 635, and 648 nm are Stokes vibronic lines due to coupling with ν 6 , ν 4 , and ν 3 vibrations. The lines at 597, 608, and 613 nm are anti-Stokes vibronics at the same energy differences from the ZPL (that is hardly observed, except for h-KRSF) as the Stokes lines. The change in local symmetry around the tetravalent ion from cubic KSF to hexagonal KSF is key for understanding the appearance of the ZPL. In the cubic phase, the Si 4+ (or Mn 4+ ) ion is symmetrically surrounded by six equidistant fluorine ligands at 1.677 Å. In ref. ( 9 ), a Rietveld refinement on the diffraction pattern of the hexagonal phase of KSF shows that there is a slight distortion of the octahedron: three ligands are at a distance of 1.681 Å while the other three are at a distance of 1.688 Å. 9 , 21 A similar deviation from inversion symmetry for Mn 4+ can be expected in h-KRSF and explains why a zero phonon line is observed for the hexagonal phase and not for the cubic phases. Again, it will be interesting to obtain single crystal data to determine the deviation from octahedral coordination for the [MF 6 ] 2– units in the K + /Rb + mixed crystal and compare this with other Mn 4+ fluoride hosts where a ZPL is observed. The enhanced ZPL is beneficial for the performance. The additional emission at ∼620 nm, where the eye sensitivity is higher, increases the efficacy. The luminous response function has its maximum at 550 nm and drops to 1% of the maximum at 680 nm. A higher fraction of the emission spectrum toward longer wavelengths reduces the efficacy. In h-KRSF, compared with c-KRSF, a smaller fraction of the emission comes from the Stokes emission lines at 630, 636, and 648 nm. The additional emission intensity at ∼620 nm results in an efficacy increase of 2.9% for h-KRSF compared with c-KRSF (see Section S1 and Figure S1 ). In addition, deviation from the inversion symmetry also increases the 4 A 2 → 4 T 2 absorption strength for the blue excitation wavelength at 450 nm as a result of relaxation of the parity selection rule. The increase in absorption strength at 450 nm is experimentally determined to be ∼34% by comparing the emission intensity of h-KRSF with that of c-KRSF under the same excitation intensity (see Section S1 and Figures S1 and S2 ). To evaluate the efficiency of the new h-KRSF, phosphor quantum yield measurements were performed. A sample with 1.8 mol % Mn incorporated had an internal quantum yield of 91%. We consider this value to be very high as little effort was put into optimizing the synthesis.
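The efficacy comparison above essentially weights each emission spectrum by the photopic eye-sensitivity curve V(λ). The sketch below illustrates that calculation for two toy line spectra; the Gaussian approximation of V(λ) and the relative line areas are stand-ins of my own, so the printed gain is not the 2.9% reported from the measured spectra.

```python
# Illustrative efficacy comparison: weight emission spectra by an (approximate)
# photopic response V(lambda). The spectra and V(lambda) below are toy stand-ins;
# a real comparison uses the tabulated CIE V(lambda) and the measured spectra.
import numpy as np

wl = np.linspace(550.0, 700.0, 3001)  # wavelength axis in nm
dw = wl[1] - wl[0]

def V(lam):
    # Crude Gaussian approximation of the photopic eye response (peak near 555 nm).
    return np.exp(-0.5 * ((lam - 555.0) / 42.0) ** 2)

def line(center_nm, area, width_nm=1.5):
    # Narrow Gaussian emission line normalized to the given area.
    return area * np.exp(-0.5 * ((wl - center_nm) / width_nm) ** 2) / (width_nm * np.sqrt(2 * np.pi))

# Toy spectra with equal total photon numbers: "cubic" has only vibronic lines,
# "hexagonal" moves part of the intensity into a ZPL at 621.5 nm.
cubic = line(613, 0.20) + line(630, 0.40) + line(648, 0.40)
hexagonal = line(621.5, 0.20) + line(613, 0.15) + line(630, 0.35) + line(648, 0.30)

lum_cubic = np.sum(cubic * V(wl)) * dw
lum_hex = np.sum(hexagonal * V(wl)) * dw
print(f"Efficacy gain of the toy 'hexagonal' spectrum: {100 * (lum_hex / lum_cubic - 1):.1f}%")
```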
For practical applications, a wLED phosphor needs to be resilient to high temperatures and a humid atmosphere. To test the stability, the luminescence of h-KRSF was measured after synthesis, and this was compared to the luminescence after 48 h exposure to 85% humidity at 85 °C. A KSF phosphor was measured simultaneously. For the h-KRSF, a decrease in luminescence of 16% was seen, which is considerably worse than the KSF reference, which showed a loss of only 1–2%. The relatively fast degradation of h-KRSF compared with KSF is attributed to the incorporation of Rb. Rb compounds tend to be more hygroscopic than K compounds, thus enhancing the degradation. 24 For practical application, the stability needs to be improved, e.g., by postsynthesis treatment, overcoating, and/or encapsulation in a protective matrix using strategies that are also explored for KSF. 25 , 26 Reducing the Rb content from 50% to 20% (a Rb concentration for which the hexagonal phase can still be obtained, vide infra ) may also enhance the stability. Furthermore, optimization is required to explore the potential of h-KRSF as a new LED phosphor. An initial test with the h-KRSF phosphor in a wLED shows promising results with a performance that is similar to that of a wLED with KSF (see Section S2 ). Concentration-Dependent Luminescence The 450 nm absorption by Mn 4+ in the 4 A 2 → 4 T 2 absorption band involves a spin-allowed but parity-forbidden transition. As discussed above, the deviation from inversion symmetry in h-KRSF is expected to make the absorption stronger than that in c-KRSF or KSF, but this absorption is still much weaker than for fully allowed transitions such as the 4f n → 4f n –1 5d transition in Ce 3+ or Eu 2+ . A high Mn 4+ concentration is, thus, beneficial for reducing the amount of phosphor required to absorb sufficient blue LED light in a wLED. At the same time, a high dopant concentration can lead to concentration quenching. Energy transfer between neighboring ions will cause migration of the excitation energy over the dopant sublattice. Especially above the percolation point (where a 3D connected lattice of dopant ions is realized), the migrating excitation energy can probe a large volume in which there is a high probability to encounter a defect or impurity quenching site, causing concentration quenching. Investigating the concentration dependence of the luminescence efficiency is therefore important, and a concentration series of h-KRSF:Mn 4+ x % ( x = 0.1–10) was synthesized (see Section S3 for the XRD patterns). It is important to realize that the fraction of Mn 4+ in the synthesis mixture is not the same as the fraction incorporated in the h-KRSF. Indeed, after evaporating the EtOH out of the reaction mixture, darker colored spots are visible within the dry powder. Washing with H 2 O 2 removes these spots. Probably these spots were compounds with a high concentration of Mn that dissolve in H 2 O 2 . 27 This also means that the fraction of Mn 4+ incorporated in h-KRSF is lower than the nominal concentration. To check the actual Mn concentration, inductively coupled plasma optical emission spectroscopy (ICP-OES) measurements were done. The measurements show that 16–60% of the added Mn is actually incorporated (see Section S4 ). The concentrations mentioned below always refer to actual concentrations in the phosphors, as determined with ICP-OES.
To study the concentration-dependent optical properties, both emission spectra and luminescence decay curves were measured for samples with Mn 4+ concentrations varying between 0.1 and 10 mol %. In Figure 4 a, the emission spectra of samples with different Mn concentrations are shown under 450 nm excitation. The samples were diluted 10× (wt %) with optically inactive BaSO 4 to limit the path length of light through the h-KRSF phosphor and reduce saturation effects in blue light absorption. It can be seen that the intensity increases with increasing Mn concentration. The integrated intensities as a function of Mn 4+ concentration ( Figure 4 b) show a rapid increase at low concentrations (up to 1% Mn 4+ ), after which it levels off. This nonlinear increase at high dopant concentrations has been observed before and is explained by saturation of blue light absorption. The integrated emission intensities of the undiluted phosphors show an even stronger leveling off with increasing Mn 4+ concentration ( Section S5 ). As the Mn 4+ concentration increases, a substantial part of the blue light is absorbed, and the fraction of absorbed light no longer increases linearly with Mn 4+ concentration, as is also evident from the Lambert–Beer law. Only for a low value of ε cl (molar extinction coefficient × concentration × path length) does the fraction of absorbed light increase linearly with concentration. This makes it difficult to determine if concentration quenching occurs based on concentration-dependent emission intensities. A better method to study concentration quenching is by measuring luminescence lifetimes. In the case of nonradiative loss processes as a result of concentration quenching, the emission lifetime will decrease. Luminescence decay curves of the 630 nm emission after pulsed 450 nm excitation are shown in Figure 4 d. A single exponential decay is observed for all concentrations, and the decay times are constant at ∼6.2 ms. The single exponential decay curves and constant decay time indicate that no concentration quenching occurs up to at least 10% Mn 4+ . Temperature-Dependent Luminescence The temperature stability of the luminescence is an important aspect of wLED phosphors. Heat is generated by the LED chip and also by the heat dissipation inherent to the conversion of a higher-energy blue photon to green or red photons. The local temperature of a phosphor in wLEDs can easily reach 150 °C. The thermal quenching behavior is therefore crucial. Indeed, Mn 4+ -doped fluorides have previously been found where the lower local symmetry also resulted in the desired observation of a strong ZPL and a shorter emission lifetime, but the poor thermal quenching behavior made these phosphors unfit for application in wLEDs. 3 , 5 , 28 The thermal quenching behavior of h-KRSF:0.1% Mn 4+ was, therefore, measured and compared with those of cubic KRSF:0.1% Mn 4+ and KSF:0.5% Mn 4+ . The temperature dependence of the integrated emission intensities in the relevant high temperature region 373–700 K is shown in Figure 5 a. The corresponding emission spectra at different temperatures of the three samples are shown in Section S6 . When the temperature increases, the emission intensity remains constant until 450 K, above which it starts to decrease. Measuring emission intensity as a function of temperature to probe thermal quenching can be complicated by intensity variations not related to thermal quenching, for example, when the oscillator strength of the absorption transition is temperature dependent.
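The saturation argument can be made concrete with a few numbers: the absorbed fraction 1 − 10^(−εcl) grows linearly with concentration only while εcl is small. The absorbance values in the sketch below are arbitrary examples, not values fitted to the data.

```python
# Illustration of absorption saturation (arbitrary absorbance values): the absorbed
# fraction 1 - 10**(-A), with A = epsilon*c*l, is linear in concentration only for
# small A and saturates toward 1 for large A.
import math

for A in (0.02, 0.1, 0.5, 1.0, 2.0):
    absorbed = 1.0 - 10.0 ** (-A)
    linear_estimate = math.log(10.0) * A      # small-A expansion of 1 - 10**(-A)
    print(f"A = {A:4.2f}: absorbed fraction = {absorbed:.3f}, linear estimate = {linear_estimate:.3f}")
```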
In addition, practical aspects, such as changes in alignment, collection efficiency, or excitation source intensity, can give rise to intensity variations not related to thermal quenching. A fast and reliable method to determine the thermal quenching temperature is to measure the emission lifetime as a function of temperature. As nonradiative decay sets in, the emission lifetime shortens because the lifetime is the inverse of the sum of radiative and nonradiative decay rates. Therefore, lifetimes were also measured as a function of temperature for h-KRSF:0.1% Mn 4+ , c-KRSF:0.1% Mn 4+ , and KSF:0.5% Mn 4+ and are shown in Figure 5 b. All the decay curves are single exponential. The lifetimes of the Mn 4+ emission in the three different host lattices are shown as a function of temperature in Figure 5 c. For all three host matrices, it can be seen that the lifetime decreases slowly up until 450–480 K, after which the lifetime drops sharply, consistent with the temperature-dependent intensity measurements. Before discussing the luminescence quenching temperature, it is interesting to discuss differences in lifetimes for Mn 4+ emission in the three compounds: the lifetime is longer for KSF and c-KRSF than that for h-KRSF. As discussed above, the perfect octahedral coordination in the two cubic lattices imposes a strict parity selection rule. This not only prevents the observation of a ZPL but also reduces the overall transition probability as the ZPL transition is forbidden. The room temperature emission lifetime is ∼6 ms for Mn 4+ in h-KRSF vs. ∼8 ms in the cubic lattices. The shorter lifetime in h-KRSF is beneficial for application in wLEDs. As mentioned in the Introduction , the long emission lifetime is a limiting factor in the total light output and prevents the application of KSF in high-brightness wLEDs. The 25% shorter lifetime helps to improve the performance of h-KRSF in higher brightness sources, although the lifetime is still long compared to that for emission in other wLED phosphors, relying on d–f emission from Ce 3+ (∼40–80 ns) or Eu 2+ (∼1–2 μs). In Figure 5 a, it is observed that the luminescence intensity is constant until 450 K, while the lifetime decreases gradually with the temperature between 100 and 400 K ( Figure 5 c). This is an indication that the change in emission lifetime is not caused by temperature quenching. This is generally observed for the 2 E emission of Mn 4+ and explained by an increase in vibronic transition probabilities induced by a higher phonon occupation number n . It is well-established that the transition probability for Stokes vibronics scales with ( n + 1) and anti-Stokes vibronics with n . 28 The corresponding change in radiative lifetime as a function of temperature is described by τ r ( T ) = τ r (0) tanh( h ν/2 k b T ) (eq 1). Here, τ r ( T ) is the radiative lifetime at temperature T (in K), τ r (0) is the low-temperature radiative lifetime, h ν is the effective phonon energy, and k b is the Boltzmann constant. This equation describes the emission lifetime before temperature quenching sets in at 450 K. Temperature quenching for Mn 4+ has been shown to occur via the 4 T 2 state with an activation energy Δ E . Together with the temperature dependence of the radiative decay time τ r ( T ) from eq 1 , the expression for the lifetime as a function of temperature is 1/τ( T ) = 1/τ r ( T ) + (1/τ nr ) exp(−Δ E / k b T ) (eq 2), where τ nr is the nonradiative decay time, which is typically on the order of picoseconds, the time scale of vibrations. We can now use eqs 1 and 2 to find the quenching temperature T 50 , defined as the temperature at which τ( T ) = (1/2)τ r ( T ) .
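A minimal numerical sketch of how T 50 follows from eqs 1 and 2 is given below. The parameter values (τ r (0), effective phonon energy, τ nr , and Δ E ) are placeholders of my own choosing that happen to give quenching around 500 K; they are not the fitted values from this work.

```python
# Minimal sketch: find T50 from eqs 1 and 2 by scanning temperature until
# tau(T) drops below half of tau_r(T). All parameter values are placeholders.
import math

K_B = 8.617e-5       # Boltzmann constant, eV/K

TAU_R0 = 9e-3        # low-temperature radiative lifetime (s), placeholder
H_NU = 0.040         # effective phonon energy (eV), placeholder (~320 cm^-1)
TAU_NR = 1e-12       # nonradiative "attempt" time (s), placeholder
DELTA_E = 1.0        # activation energy for crossover via 4T2 (eV), placeholder

def tau_r(T):
    """Eq 1: radiative lifetime shortens as vibronic rates grow with phonon occupation."""
    return TAU_R0 * math.tanh(H_NU / (2.0 * K_B * T))

def tau(T):
    """Eq 2: total lifetime including thermally activated nonradiative decay."""
    total_rate = 1.0 / tau_r(T) + (1.0 / TAU_NR) * math.exp(-DELTA_E / (K_B * T))
    return 1.0 / total_rate

T = 300.0
while tau(T) > 0.5 * tau_r(T):
    T += 1.0
print(f"T50 ~ {T:.0f} K for these placeholder parameters")
```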
The T 50 temperatures determined in this way for KSF, c-KRSF, and h-KRSF were found to be 530, 510, and 503 K, respectively. All temperatures are sufficiently high to prevent thermal quenching in wLEDs. There is a small decrease in T 50 from KSF to KRSF. The thermal luminescence quenching mechanism has been shown to occur by thermal crossover from the 2 E state to the 4 A 2 state via the 4 T 2 state. The lower the energy of the 4 T 2 state, the lower the quenching temperature will be. The slightly lower T 50 values for KRSF are consistent with a small red shift (from 452 nm in KSF to ∼458 nm in c-KRSF) of the 4 T 2 excitation band. The small red shift may be related to slightly larger distances to the F – ligands in compounds with increasing Rb content, which lowers the crystal field splitting. 28 Formation Mechanism The formation mechanism of h-KRSF is intriguing. The method was found serendipitously: the addition of extra aqueous HF to dissolve the initial precipitate, followed by the addition of EtOH, was meant to precipitate a randomly mixed K/Rb system to investigate the effect of disorder in the more distant cation (K/Rb) coordination sphere on the Mn 4+ luminescence. Interestingly, the absence of a ZPL in c-KRSF shows that deviations from inversion symmetry caused by disorder in the second (K/Rb) coordination sphere are too small to effectively relax the parity selection rule, thereby answering the original question of the research project. When the EtOH addition did not result in precipitation, the solution was left to evaporate for several days, and a new h-KRSF phase was found. To obtain better insight into the formation mechanism of h-KRSF, emission spectra were recorded during the evaporation process to follow its formation. A 445 nm laser was used to illuminate the reaction beaker (as shown in Section S7 ), while the emission spectra were recorded at regular time intervals over a period of days by a simple fiber-coupled CCD spectrometer. The results are shown in Figure 6 . Immediately after pouring the reaction mixture into EtOH, the solution shows emission spectra typical of the cubic phase with vibronic Stokes and anti-Stokes emission lines, but no ZPL. No precipitation is observed; however, blue excitation showed that much of the c-KRSF immediately concentrated in the lower part of the beaker. The presence of the typical Mn 4+ spectrum without a ZPL in the initially formed clear solution indicates that nanocrystalline c-KRSF is formed, and based on the higher concentration at the bottom of the beaker, the particle size is estimated to be 50–100 nm. The characterization and optical properties of nanocrystalline KRSF (and KSF) deserve further study but are beyond the scope of this work. Stabilizing the KRSF nanocrystals may be interesting for applications where nanocrystalline KSF offers advantages over the conventional microcrystalline material. To follow the transformation from cubic to hexagonal KRSF, the emission spectra recorded over time are shown in Figure 6 . The formation of h-KRSF is probed by monitoring the intensity of the ZPL. No ZPL is present in the cubic phase; the ZPL intensity is obtained by integrating the 618–627 nm range and subtracting the background measured in spectra recorded immediately after the addition to EtOH. The integrated ZPL intensity increases over time and shows a peculiar time dependence. There is a delay in the formation, and only after ∼15 h does the transformation to h-KRSF start and a small ZPL appear.
The relative intensity of the ZPL increases, first slowly and then rapidly, until all c-KRSF is transformed into h-KRSF. The rapidly increasing transformation rate can be well described by exponential growth: when the ZPL intensity is plotted on a logarithmic scale vs time, the increase is linear. This behavior is also typically observed when the reaction conditions are changed. There is always a delay time (induction period) before the formation of h-KRSF starts, and after that the ZPL intensity increases exponentially with time. The reaction conditions were varied by changing the Rb/K ratio and the alcohol used. The minimum fraction of Rb required to form the hexagonal phase is 20% for the synthesis procedure followed, resulting in part hexagonal and part cubic KRSF. For all the different alcohols used (from methanol to butanol), the formation of h-KRSF was observed. The induction period varied and was longer for a lower Rb content ( Section S8 ). Before discussing the formation mechanism, it is good to evaluate the thermodynamic stability of the hexagonal vs cubic phase. To test this, first, temperature-dependent XRD was used. Diffractograms from 17.5 to 24.0° (2θ) were measured. This range was chosen because in this area there are peaks that distinctively belong to either the cubic or hexagonal KRSF. After each measurement, the sample was heated by 10 K, and the next diffractogram was measured. The results between 18 and 19° 2θ are shown in Figure 7 , and the full pattern (17.5–24.0° 2θ) is shown in Section S9 . Upon heating above 500 K, the peaks at 18.85 and 20.1° 2θ (from h-KRSF) diminish and then disappear, while the peaks at 18.4 and 21.2° 2θ (from c-KRSF) increase in intensity and then remain constant above 570 K. After cooling down to RT, the peaks at 18.85 and 20.1° 2θ do not reappear. (Pure h-KSF could be more stable, and a search for such materials could reduce the thermal instability and the sensitivity to moisture.) The transformation from hexagonal to cubic indicates that at higher temperatures, the cubic phase is the most stable phase. The observation that the peaks at 18.85 and 20.1° 2θ do not reappear upon cooling shows that the transformation is irreversible. Note that this is different from K 2 MnF 6 , for which the cubic phase is not stable at RT. Heating hexagonal K 2 MnF 6 to 440 °C transformed the crystals to the cubic phase, but after storing the crystals at room temperature, they transformed back to the hexagonal phase. 29 To test whether c-KRSF transforms back to h-KRSF at lower temperatures, several experiments were done: the material was kept for months at 253 K and at RT, cubic material was heated for 1 month at 373 K, and cubic material was heated to 573 K and then slowly cooled to 435 K over 90 h. No XRD peaks of h-KRSF were found in any of the diffractograms recorded afterward, indicating that c-KRSF is the stable phase around and above RT. The XRD results were confirmed by luminescence measurements, which showed no ZPL at 621.5 nm and only the emission spectra typical of c-KRSF. Based on the observations so far, one can only speculate on the formation mechanism of h-KRSF. Initially, when the aqueous solution is added to the EtOH, nanosized c-KRSF particles are formed. Precipitation at the bottom of the beaker occurs gradually during evaporation. Possibly, the decreased alcohol content destabilizes the nanocrystals and induces particle growth. At the same time, it can destabilize the surface of the nanoparticles.
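The log-linear analysis described above can be scripted in a few lines; the sketch below fits an exponential growth rate to a synthetic ZPL-intensity trace (the induction time and rate are assumed values), purely to illustrate the procedure applied to the measured data.

```python
# Illustration of the analysis: after an induction period the integrated ZPL
# intensity grows exponentially, so a straight-line fit of ln(I_ZPL) vs time gives
# the growth rate. The "data" below are synthetic, with assumed induction time/rate.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 40.0, 81)                    # time in hours
t_induction, k = 15.0, 0.35                       # assumed induction time (h) and rate (1/h)
zpl = np.where(t > t_induction, np.exp(k * (t - t_induction)) - 1.0, 0.0)
zpl *= 1.0 + 0.05 * rng.normal(size=t.size)       # add a little measurement noise

mask = zpl > 3.0                                  # fit only the clearly growing regime
slope, intercept = np.polyfit(t[mask], np.log(zpl[mask]), 1)
print(f"Fitted exponential growth rate: {slope:.2f} per hour (simulated with k = {k})")
```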
It is known that differences in solvent change the surface–solvent interaction and can affect the crystal structure obtained for polymorphic materials. 30 , 31 A high surface area may induce the transformation to a structure with a higher density and thus a lower surface area. 32 This can explain the transformation to the hexagonal phase, as h-KSF has a higher reported density than c-KSF (2.87 vs 2.746 g/cm 3 ). 6 Once particles transform to the hexagonal phase, they can serve as seeds that grow at the expense of dissolving c-KRSF nanoparticles and give rise to an exponential increase of the fraction of h-KRSF over c-KRSF with time. A similar rapid increase in the conversion rate was recently observed by some of us in the transformation of cubic (α-phase) NaYF 4 nanocrystals to larger hexagonal (β-phase) NaYF 4 nanocrystals. 33 We were able to model this transformation by taking into account a distribution in reaction (dissolution/growth) rates for nanoparticles, first resulting in a bimodal size distribution followed by an increasingly rapid transformation to large and monodisperse β-phase crystallites with time, similar to what is observed in Figure 6 b. To obtain better insight into and evidence for a formation mechanism, further studies, such as combined in situ WAXS and SAXS measurements, are required to follow particle size and crystallinity in time and relate these to the time-dependent luminescence properties. Indeed, other mechanisms have also been reported in which an induction period is followed by a rapidly increasing transformation rate, for example, the transformation of ferrihydrite to goethite or hematite nanocrystals. 34 Alternatively, autocatalysis can explain the exponential growth of the phase transformation rate. This mechanism has been extensively studied, for example, for the transformation of α- to β-Sn. 35 A final challenge is the formation of hexagonal KSF free of Rb, especially since the presence of Rb can be linked to a lower stability of the phosphor under the extreme conditions experienced in wLEDs. To lower the amount of Rb, the synthesis of h-KRSF was done with different Rb/K ratios. Lowering the Rb fraction resulted in longer induction periods and slower formation of h-KRSF. For 40% and 30% Rb, a complete transformation to h-KRSF was still observed. For 20% Rb, there was no complete transformation (for details, see Section S9 ), while for 10% and 0% Rb, no formation of h-K(R)SF could be observed (no increase in ZPL intensity). However, based on earlier reports on the synthesis of h-KSF by Kolditz in 1963 and Gossner in 1904 7 , 8 and the observation of h-KSF in refs. ( 10 ) and ( 11 ), it is evident that h-KSF can be obtained, and it is worthwhile to pursue a synthesis route for Mn 4+ -doped h-KSF with superior performance as a wLED phosphor. To understand the role of Rb in the formation of h-KRSF, DFT calculations were done to determine the formation energies of cubic and hexagonal Rb 2 SiF 6 , KRbSiF 6 , and K 2 SiF 6 . The results and a more extensive discussion are provided in Section S10 . In h-KRSF, there are (in contrast with c-KRSF) two nonequivalent M + sites. The calculations show that the lowest energy configuration of h-KRSF is obtained when K + ions occupy the smaller M1 site and Rb + ions occupy the larger M2 site. If the ordering of the monovalent cations during crystal growth is indeed responsible for the formation of the (thermodynamically favorable) hexagonal phase, this could trigger the chain-reaction-like transformation of the other crystals that we observe.
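To show why an autocatalytic (seeded-growth) picture naturally produces an induction-like delay followed by exponential growth, the toy model below integrates dx/dt = k·x·(1 − x) for the hexagonal fraction x; the rate constant and seed fraction are illustrative values, not fitted parameters.

```python
# Toy autocatalytic model: the transformation rate is proportional to the hexagonal
# fraction already present times the remaining cubic fraction, dx/dt = k*x*(1 - x).
# A tiny seed then grows exponentially at early times and the curve is sigmoidal.
# The rate constant and seed fraction are illustrative, not fitted values.

k = 0.6        # rate constant (1/h), assumed
dt = 0.01      # time step (h) for simple forward-Euler integration
x = 1e-4       # initial (seed) fraction of hexagonal phase

trace = []
for _ in range(int(40.0 / dt)):
    x += dt * k * x * (1.0 - x)
    trace.append(x)

for t_h in (5, 10, 15, 20, 25, 30):
    print(f"t = {t_h:2d} h: hexagonal fraction ~ {trace[int(t_h / dt) - 1]:.3f}")
```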
Single-crystal XRD data could provide further information about the location of the Rb and K ions in the lattice and test this hypothesis. The ordering of K + and Rb + could also play a role in the phase transition to the cubic phase at high temperature. Disorder induced by M + ion migration can trigger the transformation to the cubic phase, which may be kinetically stable when, even for slow cooling back to room temperature, the ordering of K + on M1 and Rb + on M2 sites is hampered. To quantify how stable the hexagonal phase is compared with the cubic phase, the difference in formation energy between the hexagonal and the cubic phase was calculated. The calculations show that in all cases, the cubic phase is more stable. However, it can be seen that K 2 SiF 6 and Rb 2 SiF 6 have a much stronger preference for the cubic phase, as the energy difference between the cubic and the hexagonal phases is 43 and 71 meV per unit cell, respectively, while it is only 9 meV for KRbSiF 6 . This confirms that for the mixed K/Rb composition, it is easier to form the hexagonal phase.
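As a back-of-the-envelope way to compare these energy differences (my own arithmetic, and only a qualitative indication, since a single Boltzmann factor per unit cell ignores entropy and kinetics), the sketch below converts each ΔE into exp(−ΔE/kT) at room temperature.

```python
# Qualitative comparison only: Boltzmann factors for the quoted DFT energy
# differences (hexagonal minus cubic, per unit cell) at room temperature.
# This ignores entropy and kinetics and is meant only to give a feel for the numbers.
import math

K_B_MEV = 8.617e-2     # Boltzmann constant in meV/K
T = 298.0              # K

delta_E_meV = {"K2SiF6": 43.0, "Rb2SiF6": 71.0, "KRbSiF6": 9.0}

for compound, dE in delta_E_meV.items():
    boltzmann = math.exp(-dE / (K_B_MEV * T))
    print(f"{compound:8s}: dE = {dE:4.1f} meV, exp(-dE/kT) at 298 K = {boltzmann:.2f}")
```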
Results and Discussion Phase Identification To investigate the crystal structure and phase purity of the different materials, after synthesis, the dry powders were checked by measuring the X-ray diffractograms. In Figure 1 , the diffractograms of the different microcrystalline powders are shown with their respective references underneath. In Figure 1 , we can see that for all samples, there is good agreement with the reference diffraction patterns. This shows that the different synthesis methods result in phase-pure crystalline materials. For cubic KSF and RSF, the crystal structure is well established, and the reference diffractograms are well known. For c-KRSF, the diffraction lines are at angles in between KSF and RSF, as expected for a solid solution. A good agreement with the experimentally observed positions of diffraction lines was obtained by assuming an increase of 2% in lattice distances compared with the KSF reference. A slight increase is expected by the replacement of K by Rb as the ionic radius of Rb + (1.72 Å) is larger than that of K + (1.64 Å), causing a small expansion of the unit cell. 23 The reference pattern of h-KRSF is based on an earlier work on hexagonal KSF. In ref. ( 9 ), the XRD pattern for h-KSF is reported and used to derive lattice parameters a = 5.67 and c = 9.24 and identify two different sites for the K + ion, a smaller M1 and a larger M2 site. The diffraction pattern obtained here for KRSF is very similar. A good match is obtained for slightly larger lattice parameters a = 5.78 and c = 9.42, providing convincing evidence for the formation of hexagonal KRbSiF 6 :Mn 4+ . The powder XRD data do not allow us to distinguish between the ordering of Rb + and K + on the M1 and M2 sites. It will be interesting to obtain high-quality single crystal data to obtain information on site occupation in the mixed crystal. To evaluate the particle size and particle size distribution, we made SEM images of the final product. The SEM image in Figure 2 shows that the synthesis procedure used results in a homogeneous particle size distribution with an average particle size of ∼30 μm. Optical Properties To study the optical properties of Mn 4+ in the new h-KRSF, both PL and PLE spectra were measured for low-doped samples (0.1–0.5% Mn 4+ ). For comparison, the spectra of Mn 4+ in cubic KRSF, KSF, and RSF were measured as well. In Figure 3 , it is observed that all the PLE spectra have two relatively strong and broad excitation bands around 360 and 460 nm. The 460 nm band shows some sharp lines around 470 nm. These can be ascribed to Xe-lamp lines that are visible in spite of correcting the spectra for variations in the Xe-lamp intensity. A zoom-in for the area between 560 and 625 nm shows a multitude of weak and narrow excitation lines. The PLE spectra of the four samples are very similar to one exception: there is a sharp extra peak at 621.5 nm for Mn 4+ in h-KRSF. In the PL spectra ( Figure 3 c) again, all spectra are very similar showing sharp emission lines at the same positions, with small shifts of ∼0.5 nm to longer wavelengths from KSF to RSF. Again there is one exception: an extra peak at 621.5 nm for Mn 4+ in the hexagonal form of KRSF. Based on the Tanabe–Sugano diagram for 3d 3 ions in octahedral symmetry, the excitation bands at 360 and 460 nm in the PLE spectra are assigned to the 4 A 2 → 4 T 1 and 4 A 2 → 4 T 2 transitions, respectively. 
In the zoom-in spectra, Figure 3 b, the peaks observed from 560 to 595 nm are assigned to the vibronic lines of the 4 A 2 → 2 T 1 transition and from 600 to 625 nm to vibronic excitation lines of the 4 A 2 → 2 E transition in the cubic modifications. For Mn 4+ in inversion symmetry, all 3d 3 → 3d 3 transitions are parity forbidden, and coupling with odd-parity vibrations is required to partly lift the parity selection rule, resulting in the observation of vibronic excitation and emission lines. In h-KRSF, the Mn 4+ ion is in a site with lower symmetry, and static odd-parity crystal field components allow for breaking the parity selection rule. As a result, also the purely electronic zero-phonon transition can be observed. For the 4 A 2 → 2 E transition in h-KRSF, this zero-phonon line (ZPL) is at 621.5 nm and is identical in the excitation and emission spectra. The positions of the vibronic emission lines in KSF ( Figure 3 c) are 597, 608, 613, 630, 635, and 648 nm, in agreement with earlier reports. The lines at 630, 635, and 648 nm are Stokes vibronic lines due to coupling with ν 6 , ν 4 , and ν 3 vibrations. The lines at 597, 608, and 613 nm are anti-Stokes vibronics at the same energy differences from the ZPL (that is hardly observed, except for h-KRSF) as the Stokes lines. The change in local symmetry around the tetravalent ion in cubic KSF to hexagonal KSF is key for understanding the appearance of the ZPL. In the cubic phase, the Si 4+ atom (or the Mn 4+ ) is symmetrically surrounded by six equidistant fluorine ligands at 1.677 Å. In ref. ( 9 ), a Rietveld refinement on the diffraction pattern of the hexagonal phase of KSF shows that there is a slight distortion of the octahedron: three ligands are at a distance of 1.681 Å while the others are at a distance of 1.688 Å. 9 , 21 A similar deviation from inversion symmetry for Mn 4+ can be expected in h-KRSF and explains why for the hexagonal phase a zero phonon line is observed and not for the cubic phases. Again, it will be interesting to obtain single crystal data to determine the deviation from octahedral coordination for the [MF 6 ] 2– units in the K + /Rb + mixed crystal and compare this with other Mn 4+ fluoride hosts where a ZPL is observed. The enhanced ZPL is beneficial for the performance. The additional emission at ∼620 nm where the eye sensitivity is higher, increases the efficacy. The luminous response function has its maximum at 550 nm and drops to 1% of the maximum at 680 nm. A higher fraction of the emission spectrum toward longer wavelengths reduces the efficacy. If we compare c-KRSF to h-KRSF, a smaller fraction of the emission is from the Stokes emission lines of 630, 636, and 648 nm. The additional emission intensity at ∼620 nm results in an efficacy increase of 2.9% for h-KRSF compared with c-KRSF (see Section S1 and Figure S1 ). In addition, deviation from the inversion symmetry also increases the 4 A 2 → 4 T 2 absorption strength for the blue excitation wavelength at 450 nm as a result of relaxation of the parity selection rule. The increased absorption strength at 450 nm is experimentally observed to be ∼34% by comparing the emission intensity of c-KRSF with h-KRSF under the same excitation intensity (see Section S1 and Figures S1 and S2 ). To evaluate the efficiency of the new h-KRSF, phosphor quantum yield measurements were done. A sample with 1.8 mol % Mn incorporated had an internal quantum yield of 91%. We consider this value to be very high as little effort was put into optimizing the synthesis. 
For practical applications, a wLED phosphor needs to be resilient to high temperatures and a humid atmosphere. To test the stability, the luminescence of h-KRSF was measured after synthesis, and this was compared to the luminescence after 48 h exposure to 85% humidity at 85 °C. A KSF phosphor was measured simultaneously. For the h-KRSF, a decrease in luminescence of 16% was seen, which is considerably worse than that of the KSF, which showed a loss of 1–2%. The relatively fast degradation of h-KRSF compared with KSF is attributed to the incorporation of Rb. Rb compounds tend to be more hygroscopic than K compounds, thus, enhancing the degradation. 24 For practical application, the stability needs to be improved, e.g., by postsynthesis treatment, overcoating, and/or encapsulation in a protective matrix using strategies that are also explored for KSF. 25 , 26 Reducing the Rb content from 50% to 20% (a Rb concentration for which the hexagonal phase can still be obtained, vide infra ) may also enhance the stability. Furthermore, optimization is required to explore the potential of h-KRSF as a new LED phosphor. An initial test with h-KRSF phosphor in a w-LED shows promising results with a performance that is similar to that of a wLED with KSF (see Section S2 ). Concentration-Dependent Luminescence The 450 nm absorption by Mn 4+ in the 4 A 2 → 4 T 1 absorption band involves a spin-allowed, but parity-forbidden transition. As discussed above, the deviation from inversion symmetry in h-KRSF is expected to make the absorption stronger than that in c-KRSF or KSF, but this absorption is still much weaker than for fully allowed transitions such as the 4f n → 4f n –1 5d transition in Ce 3+ or Eu 2+ . A high Mn 4+ concentration is, thus, beneficial for reducing the amount of phosphor required to absorb sufficient blue LED light in a wLED. At the same time, a high dopant concentration can lead to concentration quenching. Energy transfer between neighboring ions will cause migration of the excitation energy over the dopant sublattice. Especially above the percolation point (where a 3D connected lattice of dopant ions is realized), the migrating excitation energy can probe a large volume in which there is a high probability to encounter a defect or impurity quenching site causing concentration quenching. Investigating the concentration dependence of the luminescence efficiency is therefore important, and a concentration series of h-KRSF:Mn 4+ x % ( x = 0.1–10) was synthesized (see Section S3 for the XRD patterns). It is important to realize that the fraction of Mn 4+ in the synthesis mixture is not the same as the fraction incorporated in the h-KRSF. Indeed, after evaporating the EtOH out of the reaction mixture, darker colored spots are visible within the dry powder. Washing with H 2 O 2 removes these spots. Probably these spots were compounds with a high concentration of Mn that dissolve in H 2 O 2 . 27 This also means that the fraction of Mn 4+ incorporated in h-KRSF is lower than the nominal concentration. To check the actual Mn concentration, inductively coupled plasma optical emission spectroscopy (ICP-OES) measurements were done. The measurements show that 16–60% of the added Mn is actually incorporated (see Section S4 ). The concentrations mentioned below always refer to actual concentrations in the phosphors, as determined with ICP-OES. 
To study the concentration-dependent optical properties, both emission spectra and luminescence decay curves were measured for samples with Mn 4+ concentrations varying between 0.1 and 10 mol %. In Figure 4 a, the emission spectra of samples with different Mn concentrations are shown under 450 nm excitation. The samples were diluted 10× (wt %) with optically inactive BaSO 4 to limit the path length of light through the h-KRSF phosphor and reduce saturation effects in blue light absorption. It can be seen that the intensity increases with an increasing Mn concentration. The integrated intensities as a function of Mn 4+ concentration ( Figure 4 b) show a rapid increase at low concentrations (up to 1% Mn 4+ ), after which it levels off. This nonlinear increase at high dopant concentrations has been observed before and is explained by saturation of blue light absorption. The integrated emission intensities of the undiluted phosphors show an even stronger leveling off with increasing Mn 4+ concentration ( Section S5 ). As the Mn 4+ concentration increases, a substantial part of the blue light is absorbed, and the fraction of absorbed light no longer increases linearly with Mn 4+ concentration, as is also evident from Lambert–Beers’ law. Only for a low value of ε cl (molar extinction coefficient × concentration × path length), the fraction of absorbed light increases linearly with concentration. This makes it difficult to determine if concentration quenching occurs based on concentration-dependent emission intensities. A better method to study concentration quenching is by measuring luminescence lifetimes. In the case of nonradiative loss processes as a result of concentration quenching, the emission lifetime will decrease. Luminescence decay curves of the 630 nm emission after pulsed 450 nm excitation are shown in Figure 4 d. A single exponential decay is observed for all concentrations, and the decay times are constant ∼6.2 ms. The single exponential decay curves and constant decay time indicate that no concentration quenching occurs up to at least 10% Mn 4+ . Temperature-Dependent Luminescence The temperature stability of the luminescence is an important aspect of wLED phosphors. Heat is generated by the LED chip and also by heat dissipation inherent to the conversion of a higher energy blue photon to green or red photons. The local temperature of a phosphor in wLEDs can easily reach 150 °C. The thermal quenching behavior is therefore crucial. Indeed, previously Mn 4+ -doped fluorides have been found where the lower local symmetry also resulted in the desired observation of a strong ZPL and shorter emission lifetime, but the poor thermal quenching behavior made these phosphors unfit for application in wLEDs. 3 , 5 , 28 The thermal quenching behavior of h-KRSF:0.1% Mn 4+ was, therefore, measured and compared with those of cubic KRSF:0.1% Mn 4+ and KSF:0.5% Mn 4+ . The temperature dependence of the integrated emission intensities in the relevant high temperature region 373–700 K is shown in Figure 5 a. The corresponding emission spectra at different temperatures of the three samples are shown in Section S6 . When the temperature increases, the emission intensity remains constant until 450 K, above which it starts to decrease. Measuring emission intensity as a function of temperature to probe thermal quenching can be complicated by intensity variations not related to thermal quenching, for example, when the oscillator strength of the absorption transition is temperature dependent. 
In addition, practical aspects, such as changes in alignment, collection efficiency, or excitation source intensity, can give rise to intensity variations not related to thermal quenching. A fast and reliable method to determine the thermal quenching temperature is to measure the emission lifetime as a function of temperature. As nonradiative decay sets in, the emission lifetime shortens because the lifetime is the inverse of the sum of radiative and nonradiative decay rates. Therefore, lifetimes were also measured as a function of temperature for h-KRSF:0.1% Mn 4+ , c-KRSF:0.1% Mn 4+ , and KSF:0.5%: Mn 4+ and are shown in Figure 5 b. All the decay curves are single exponential. The lifetimes of the Mn 4+ emission in the three different host lattices are shown as a function of temperature in Figure 5 c. For all three host matrices, it can be seen that the lifetime decreases slowly up until 450–480 K after which the lifetime drops sharply, consistent with the temperature-dependent intensity measurements. Before discussing the luminescence quenching temperature, it is interesting to discuss differences in lifetimes for Mn 4+ emission in the three compounds: the lifetime is longer for KSF and c-KRSF than that for h-KRSF. As discussed above, the perfect octahedral coordination in the two cubic lattices imposes a strict parity selection rule. This does not only prevent the observation of a ZPL but also reduces the overall transition probability as the ZPL transition is forbidden. The room temperature emission lifetime is ∼6 ms for Mn 4+ in h-KRSF vs. ∼8 ms in the cubic lattices. The shorter lifetime in h-KRSF is beneficial for application in wLEDs. As mentioned in Introduction , the long emission lifetime is a limiting factor in the total light output and prevents the application of KSF in high-brightness wLEDs. The 25% shorter lifetime helps to improve the performance of h-KRSF in higher brightness sources although the lifetime is still long compared to that for emission in other wLED phosphors, relying on d–f emission from Ce 3+ (∼40–80 ns) or Eu 2+ (∼1–2 μs). In Figure 5 a, it is observed that the luminescence intensity is constant until 450 K, while the lifetime decreases gradually with the temperature between 100 and 400 K ( Figure 5 c). This is an indication that the change in emission lifetime is not caused by temperature quenching. This is generally observed for the 2 E emission of Mn 4+ and explained by an increase in vibronic transition probabilities induced by a higher phonon occupation number n . It is well-established that the transition probability for Stokes vibronics scales with ( n + 1) and anti-Stokes vibronics with n . 28 The corresponding change in radiative lifetime as a function of temperature is described by Here, τ r ( T ) is the radiative lifetime at temperature T (in K), h ν is the effective phonon energy, and k b is the Boltzmann constant. This equation describes the emission lifetime before temperature quenching sets in at 450 K. Temperature quenching for Mn 4+ has been shown to occur via the 4 T 2 state with an activation energy Δ E . Together with temperature dependence for the radiative decay time τ r ( T ) from eq 1 , the expression for the lifetime as a function of temperature is where τ nr is the nonradiative decay time, which is typically in the order of picoseconds, the time scale of vibrations. We can now use eqs 1 and 2 to find the quenching temperature T 50 , defined as the temperature at which τ( T ) = (1/2)τ r ( T ) . 
The T 50 temperatures determined in this way for KSF, c-KRSF, and h-KRSF were found to be 530, 510, and 503 K, respectively. All temperatures are sufficiently high to prevent thermal quenching in wLEDs. There is a small decrease in T 50 from KSF to KRSF. The thermal luminescence quenching mechanism has been shown to occur by thermal crossover from the 2 E state to the 4 A 2 state via the 4 T 2 state. The lower the energy of the 4 T 2 state, the lower the quenching temperature will be. The slightly lower T 50 values for KRSF are consistent with a small red shift (from 452 nm in KSF to ∼458 nm in c-KRSF) of the 4 T 2 excitation band. The small redshift may be related to slightly larger distances to the F – ligands in compounds with increasing Rb content, which lowers the crystal field splitting. 28 Formation Mechanism The formation mechanism of h-KRSF is intriguing. The method was found serendipitously: the addition of extra aqueous HF to dissolve the initial precipitate followed by the addition of EtOH was meant to precipitate a random mixed phase Rb/K system to investigate the role of disorder in a more distant cation (K/Rb) coordination sphere on the Mn 4+ luminescence. Interestingly, the absence of a ZPL in c-KRSF shows that deviations from inversion symmetry caused by disorder in the second (K/Rb) coordination sphere are too small to effectively relax the parity selection rule, as was the original goal of the research project. When the EtOH addition did not result in precipitation, the solution was left to evaporate for several days, and a new h-KRSF phase was found. It is interesting to obtain better insight into the formation mechanism of h-KRSF. Therefore, to follow the formation of h-KRSF, emission spectra were recorded during the evaporation process. A 445 nm laser was used to illuminate the reaction beaker (as shown in Section S7 ), while the emission spectra were recorded at regular time intervals over a period of days by a simple fiber-coupled CCD spectrometer. The results are listed in Figure 6 . Immediately after pouring the reaction mixture in EtOH, the solution shows emission spectra typical of the cubic phase with vibronic Stokes and anti-Stokes emission lines, but no ZPL. No precipitation is observed, however, blue excitation showed that much of the c-KRSF immediately concentrated on the lower part of the beaker. The presence of the typical Mn 4+ spectrum without a ZPL in the initially formed clear solution indicates that nanocrystalline c-KRSF is formed, and based on the higher concentration at the bottom of the beaker, the particle size is estimated to be 50–100 nm. The characterization and optical properties of nanocrystalline KRSF (and KSF) deserve further study but are beyond the scope of this work. Stabilizing the KRSF nanocrystals may be interesting for applications where nanocrystalline KSF offers advantages over the conventional microcrystalline material. To follow the transformation from cubic to hexagonal KRSF, the emission spectra as recorded over time are shown in Figure 6 . The formation of h-KRSF is probed by monitoring the intensity of the ZPL. No ZPL is present in the cubic phase, and by integrating the 618–627 nm range, the ZPL intensity is measured by subtracting the background measured in spectra recorded immediately after the addition to EtOH. The integrated ZPL intensity increases over time and shows a peculiar time dependence. There is a delay in the formation, and only after ∼15 h the transformation to h-KRSF starts and a small ZPL appears. 
The relative intensity of the ZPL increases, first slowly and then rapidly until all c-KRSF is transformed into h-KRSF. The rapidly increasing transformation rate can be well described by exponential growth: when plotted on a logarithmic scale vs time, the ZPL intensity increase is linear. This behavior is typically observed also when reaction conditions are changed. There is always a delay time (induction period) before the formation of h-KRSF starts, and after that the ZPL intensity increases exponentially with time. The reaction conditions were varied by changing the Rb/K ratio and the alcohol used. The minimum fraction of Rb required to form the hexagonal phase is 20% for the synthesis procedure followed, resulting in part hexagonal and part cubic KRSF. For all the different alcohols used (from methanol to butanol), the formation of h-KRSF was observed. The induction period varied and was longer for a lower Rb content ( Section S8 ). Before discussing the formation mechanism, it is good to evaluate the thermodynamic stability of the hexagonal vs cubic phase. To test this, first, temperature-dependent XRD was used. Diffractograms from 17.5 to 24.0° (2θ) were measured. This range was chosen because in this area there are peaks that distinctively belong to either the cubic or hexagonal KRSF. After each measurement, the sample was heated by 10 K, and the next diffractogram was measured. The results between 18 and 19° 2θ are shown in Figure 7 , and the full pattern (17.5–24.0° 2θ) is shown in Section S9 . Upon heating above 500 K, the peaks at 18.85 and 20.1° 2θ (from h-KRSF) diminish and then disappear, while the peaks at 18.4 and 21.2° 2θ (from c-KRSF) increase in intensity and then remain constant above 570 K. After cooling down to RT, the peaks at 18.85 and 20.1° 2θ do not reappear. Add that the pure h-KSF could be more stable and that a search for these materials could reduce thermal instability and sensitivity to moisture. The transformation from hexagonal to cubic indicates that at higher temperatures, the cubic phase is the most stable phase. The observation that the peaks at 18.85 and 20.1° 2θ do not reappear upon cooling shows that the transformation is irreversible. Note that this is different from K 2 MnF 6 for which the cubic phase is not stable at RT. Heating hexagonal K 2 MnF 6 to 440 °C transformed the crystals to the cubic phase but after storing the crystals at room temperature they transformed back to the hexagonal phase. 29 To test whether c-KRSF transforms back to h-KRSF at lower temperatures, several experiments were done: the material was kept for months at 253 K and RT, cubic material was heated for 1 month at 373 K and also heated and then slowly cooled from 573 to 435 K in 90 h. No XRD peaks of h-KRSF were found in any of the diffractograms recorded afterward indicating that c-KRSF is the stable phase around and above RT. The XRD results were confirmed by luminescence measurements, which showed no ZPL at 621.5 nm and only the emission spectra typical of c-KRSF. Based on the observations so far, one can only speculate on the formation mechanism of h-KRSF. Initially, when the aqueous solution is added to the EtOH, nanosized c-KRSF particles are formed. Precipitation at the bottom of the beaker occurs gradually during evaporation. Possibly, the decreased alcohol content destabilizes the nanocrystals and induces particle growth. At the same time, it can destabilize the surface of the nanoparticles. 
It is known that differences in solvent changes the surface–solvent interaction and can affect the obtained crystal structure of polymorphic materials. 30 , 31 A high surface area may induce the transformation to a structure with a higher density and thus less surface area. 32 This can explain the transformation to h-KSF, as h-KSF has a higher reported density than c-KSF (2.87 vs 2.746 g/cm 3 ). 6 Once particles transform to the hexagonal phase, they can serve as seeds that grow at the expense of dissolving c-KRSF nanoparticles and give rise to an exponential increase of the fraction of h-KRSF over c-KRSF with time. A similar rapid increase in the conversion rate was recently observed by some of us in the transformation of cubic (α-phase) NaYF 4 nanocrystals to larger hexagonal (β-phase) NaYF 4 nanocrystals. 33 We were able to model this transformation by taking into account a distribution in reaction (dissolution/growth) rates for nanoparticles, first resulting in a bimodal size distribution followed by an increasingly rapid transformation to large and monodisperse β-phase crystallites with time, similar to what is observed in Figure 6 b. To obtain better insight into and evidence for a formation mechanism, further studies, such as combined in situ WAXS and SAXS measurements, are required to follow particle size and crystallinity in time and relate these to the time-dependent luminescence properties. Indeed, also other mechanisms have been reported where an induction period is followed by a rapidly increasing transformation rate, for example, the transformation of ferrihydrite to goethite or hematite nanocrystals. 34 Alternatively, autocatalysis can explain exponential growth of the phase transformation rate. This mechanism has been extensively studied, for example, the transformation of α- to β-Sn. 35 A final challenge is the formation of hexagonal KSF free of Rb, especially since the presence of Rb can be linked to a lower stability of the phosphor under the extreme conditions experienced in wLEDs. To lower the amount of Rb, the synthesis of h-KRSF was done with different Rb/K ratios. Lowering the Rb-fraction resulted in longer induction periods and slower formation of h-KRSF. For 40% and 30% Rb, still a complete transformation to h-KRSF was observed. For 20% Rb, there was no complete transformation (for details, see Section S9 ), while for 10% and 0% Rb, no formation of h-K(R)SF could be observed (no increase in ZPL intensity). However, based on earlier reports on the synthesis of h-KSF by Kolditz in 1963 and Gossner in 1904 7 , 8 and the observation of h-KSF in refs. ( 10 ) and ( 11 ), it is evident that h-KSF can be obtained, and it is worthwhile pursuing a synthesis method to realize the synthesis of h-KSF doped with Mn 4+ with superior performance as a wLED phosphor. To understand the role of Rb in the formation of h-KRSF, DFT calculations were done to determine the formation energies of cubic and hexagonal Rb 2 SiF 6 , KRbSiF 6 , and K 2 SiF 6 . The results and a more extensive discussion are provided in Section S10 . In h-KRSF, there are (in contrast with c-KRSF) two nonequivalent M + sites. The calculations show that the lowest energy configuration of h-KRSF is obtained when K + ions occupy the smaller M1 site and Rb + ions occupy the larger M2 site. If the ordering of the monovalent cations during crystal growth is indeed responsible for the formation of the (thermodynamically favorable) hexagonal phase, this could trigger the chain reaction among the other crystals we observe. 
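The seeded, chain-reaction-like growth invoked here can be illustrated with a minimal autocatalytic rate model. This is a toy sketch only (a logistic rate law with an arbitrarily chosen rate constant), not the dissolution/growth model of ref 33 and not a fit to the data of Figure 6:

import numpy as np
from scipy.integrate import solve_ivp

k = 0.8     # assumed autocatalytic rate constant, 1/h (illustrative)
x0 = 1e-6   # tiny initial fraction of hexagonal seed material

def rate(t, x):
    # Conversion rate proportional to the h-KRSF already present (the seeds)
    # and to the remaining c-KRSF, i.e., autocatalytic growth.
    return k * x * (1.0 - x)

sol = solve_ivp(rate, (0.0, 40.0), [x0], dense_output=True)
t = np.linspace(0.0, 40.0, 400)
x_hex = sol.sol(t)[0]

# For small x, x(t) grows as x0*exp(k*t): the converted fraction stays below
# any realistic detection limit for many hours (an apparent induction period)
# and then rises steeply, qualitatively resembling the delayed, exponentially
# accelerating ZPL growth seen in Figure 6.

Such a model only illustrates why seeded growth naturally produces an induction period followed by exponential conversion; distinguishing it from the other mechanisms mentioned above would require the in situ scattering experiments suggested earlier.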
Single-crystal XRD data could provide further information about the location of the Rb and K ions in the lattice and test this hypothesis. K + and Rb + ordering could also play a role in the phase transition to the cubic phase at high temperature. Disorder induced by M + ion migration can trigger the transformation to the cubic phase, which may then be kinetically stable if, even on slow cooling back to room temperature, the ordering of K + on M1 and Rb + on M2 sites is hampered. To quantify the relative stability of the hexagonal and cubic phases, the difference between their formation energies was calculated. The calculations show that in all cases the cubic phase is more stable. However, K 2 SiF 6 and Rb 2 SiF 6 have a much stronger preference for the cubic phase, as the energy difference between the cubic and hexagonal phases is 43 and 71 meV per unit cell, respectively, while it is only 9 meV for KRbSiF 6 . This confirms that the hexagonal phase is easier to form for the mixed K/Rb composition.
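To put these formation-energy differences in perspective, a rough Boltzmann estimate (ignoring entropic, surface, and solvent contributions, so only indicative) compares them with k b T ≈ 26 meV at room temperature: exp(−Δ E / k b T ) ≈ 0.7 for KRbSiF 6 (9 meV), ≈ 0.2 for K 2 SiF 6 (43 meV), and ≈ 0.06 for Rb 2 SiF 6 (71 meV). For the mixed K/Rb composition, the hexagonal phase is thus only marginally disfavored on the thermal energy scale, in line with its accessibility from solution.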
Conclusion The luminescence of Mn 4+ in a new hexagonal phase of KRbSiF 6 is reported. The optical properties have clear advantages over those for Mn 4+ in cubic KSF. The deviation from inversion symmetry allows for the observation of a strong zero-phonon line and shortens the luminescence lifetime of Mn 4+ . This improves the lumen/W efficacy, increases the absorption strength, and reduces saturation at high blue photon fluxes. The quenching temperature of the Mn 4+ luminescence in the hexagonal phase is very high and comparable to that in the cubic phase (>500 K). High quantum yields (>90%) are realized without synthesis optimization, but the stability is lower than that of cubic KSF, probably due to the large fraction of Rb. The h-KRSF is synthesized by adding precursors dissolved in water to an excess volume of ethanol followed by slow evaporation of the ethanol. The formation mechanism is intriguing and was studied by continuously measuring luminescence spectra of the (nano)particles in the reaction volume. After an induction period of ∼15 h, the precipitate started to transform to the hexagonal phase with an exponentially increasing transformation rate. After 8 h, it was fully transformed. The stability of the hexagonal phase was tested by temperature-dependent XRD and luminescence measurements, which showed that above 200 °C, h-KRSF transforms irreversibly back to c-KRSF. The higher efficacy, shorter luminescence lifetime, and high quenching temperature make the hexagonal phase superior to cubic KSF, especially if a Rb-free synthesis route for pure h-KSF can be found to match the stability of c-KSF.
The efficient red-emitting phosphor K 2 SiF 6 :Mn 4+ (KSF) is widely used for low-power LED applications. The saturated red color and sharp line emission are ideal for application in backlight LEDs for displays. However, the long excited state lifetime lowers the external quantum yield (EQY) at high photon flux, limiting the application in (higher power density) lighting. Here, we report the synthesis of a new crystalline phase: hexagonal (K,Rb)SiF 6 :Mn 4+ (h-KRSF). Due to the lower local symmetry, the Mn 4+ emission in this new host material shows a pronounced zero phonon line, which is different from Mn 4+ in the cubic KSF. The lower symmetry reduces the excited state lifetime, and thus, the loss of EQY under high photon fluxes, and the spectral change also increases the lumen/W output. Temperature-dependent emission and lifetime measurements reveal a high luminescence quenching temperature of ∼500 K, similar to that of KSF. The formation mechanism of h-KRSF was studied in situ by measuring the emission spectra of the precipitate in solution over time. Initially, nanocrystalline cubic KRSF (c-KRSF) is formed, which transforms into a microcrystalline hexagonal precipitate with a surprising exponential increase in the transformation rate with time. The stability of the new phase was studied by temperature-dependent XRD, and an irreversible transition back to the cubic phase was seen upon heating to temperatures above 200 ° C.
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsami.3c13715 . Luminous efficacy calculations, incorporation in wLED, XRD patterns as a function of [Mn 4+ ], ICP-OES, temperature-dependent emission and XRD, in situ emission setup and ab initio DFT calculations ( PDF ) Supplementary Material Author Contributions § A.J.v.B. and J.W.d.W. contributed equally. The authors declare no competing financial interest. Acknowledgments Financial support from Nichia Corporation (Japan) is gratefully acknowledged. J.W. and A.M. acknowledge financial support from the project CHEMIE.PGT.2019.004 of TKI/Topsector Chemie, which is partly financed by The Netherlands Organisation for Scientific Research (NWO).
CC BY
no
2024-01-16 23:43:51
ACS Appl Mater Interfaces. 2023 Dec 18; 16(1):1044-1053
oa_package/45/1f/PMC10788833.tar.gz
PMC10788834
38134036
Introduction Micro- and nanomotors have emerged as cutting-edge technologies in materials science. These miniature devices are capable of navigating through complex environments and performing specific tasks at small scales, promising a wide range of applications and advancements in various fields, such as targeted drug delivery and therapy, 1 − 4 environmental monitoring and remediation, 5 − 7 biosensing, 8 and object manipulation. 9 , 10 The propulsion mechanisms of micro- and nanomotors are diverse, ranging from chemical reactions, acoustic waves, and magnetic fields to electric fields and light. 11 − 15 Among them, light-driven micro- and nanomotors have attracted considerable attention. 16 − 18 Light-powered micro- and nanomotors convert light energy into mechanical motion through various mechanisms involving light-absorbing materials on the motor’s surface. When exposed to light, these materials generate a product gradient (neutral or charged) or localized heating, resulting in a concentration, electric potential, or thermal gradient that propels motors through the surrounding fluid by self-diffusiophoresis, self-electrophoresis, or self-thermophoresis. 19 Common choices for photoactive materials include photocatalytic semiconductor micro- and nanoparticles, such as the UV light-activated TiO 2 and ZnO, 20 , 21 or visible light-activated α-Fe 2 O 3 and Cu 2 O, prepared by low-cost chemical syntheses. 22 , 23 In most cases, these semiconductors are highly symmetric or are limited by the recombination of photogenerated charge carriers. To solve these problems and unlock the self-propulsion ability, thin layers of noble metal catalysts, like Pt, Au, and Ag, are usually deposited through physical vapor deposition methods on the semiconductors’ surface to construct “two-faced” Janus structures, breaking the micro- and nanoparticles’ symmetry and improving charge carrier separation. 24 Despite the numerous advantages of light-driven metal–semiconductor Janus micro- and nanomotors, several challenges remain. A significant focus has been placed on increasing the velocity of these motors, as it would enable them to accelerate mass transfer-limited chemical reactions or physical processes. One of the initial strategies involved determining the optimal metal to pair with a fixed semiconductor, aiming to enhance the velocity of the micromotor. It was revealed that Pt is the best choice for TiO 2 micromotors regardless of potential secondary pollution caused by Pt corrosion in water. At the same time, Au has been identified as the optimal choice for BiOI micromotors. 25 , 26 Additional studies have shown that using bimetallic coatings or depositing metal layers successively can improve the velocity of photocatalytic micromotors. 27 , 28 In addition, increasing metal deposition time, and so metal layer thickness and compactness, has proved to speed up micromotors’ self-propulsion. 29 This higher velocity is often linked to the larger electrochemical potential difference between the metal and semiconductor. However, an important aspect that remains relatively unexplored is the influence of the electronic properties of the metal–semiconductor junction on the velocity of light-powered micro- and nanomotors. For instance, the interaction between a metal and a semiconductor is known to dramatically alter the behavior of the junction, inducing a semiconductor energy band bending which controls the flow of current from and to the metal. 
30 Understanding and thoroughly investigating this aspect could provide valuable insights into further optimizing the performance of these motors. MXenes represent a class of 2D materials with the general formula M n +1 X n T x ( n = 1, 2, 3), where M stands for early transition metals like Ti, Mo, or V, X represents C and/or N, and T x denotes the surface-terminating functionality, which can be −O, −F, or −OH. 31 , 32 Due to their multilayered structure, resembling an accordion, and high surface area, exfoliated MXenes have gathered significant interest in fabricating innovative and versatile light-driven micromotors. A recent study demonstrated that Pt–Ti 3 C 2 nanoflakes, derived by ultrasonication-induced delamination of exfoliated MXene microparticles followed by Pt layer deposition, autonomously moved under UV light irradiation in pure water because of the spontaneous formation of superficial TiO 2 . 33 Another investigation aimed at converting Ti 3 C 2 T x MXene microparticles into photocatalytic TiO 2 through thermal annealing processes that preserved the characteristic multilayered structure of MXenes. After Pt layer deposition and surface decoration with magnetic γ-Fe 2 O 3 nanoparticles, the resulting micromotors showed self-propulsion in the 3D space under UV light irradiation in pure water thanks to a powerful driving force in the upward direction, characterized by high velocities up to 16 μm s –1 . 34 Similarly, another report introduced micromotors based on V 2 C MXene microparticles, Bi nanoparticles, serving as cocatalysts, and magnetic γ-Fe 2 O 3 nanoparticles. Under visible light irradiation, these micromotors exhibited velocities ranging from approximately 1 μm s –1 in pure water to 3 μm s –1 in the presence of a high concentration of 5 wt % H 2 O 2 fuel. 35 Still, the reliance on Pt or toxic H 2 O 2 restricts the concrete applicability of these micromachines. The present study investigates the fabrication, characterization, and light-driven self-propulsion of metal–semiconductor micromotors with different types of metal materials and TiO 2 as the semiconducting material, aiming to get more insights into how the electronic properties of the metal–semiconductor junction affect their motion behaviors and velocities ( Scheme 1 ). The fabrication of the micromotors involves the thermal annealing process of exfoliated Ti 3 C 2 T x MXene microparticles to produce multilayered TiO 2 microparticles, followed by the asymmetric deposition of Au or Ag layers by the sputtering technique to obtain Janus structures. As confirmed by numerical simulations, Au and Ag’s depositions lead to different bending of TiO 2 energy bands, forming Schottky contacts characterized by intense electric fields at the metal–semiconductor interface. In pure water, the Schottky junction effect is more significant, with Au–TiO 2 micromotors showing higher self-propulsion velocities than Ag–TiO 2 micromotors under UV light irradiation due to the stronger built-in electric field, which efficiently separated photogenerated electron–hole pairs within the semiconductor. This phenomenon also results in hole accumulation beneath the metal surface, favoring the self-electrophoretic mechanism. On the opposite, the introduction of a small amount of 0.1 wt % H 2 O 2 fuel completely changes the dynamics. 
The superior catalytic properties of Ag in decomposing H 2 O 2 give rise to a large product concentration gradient, allowing Ag–TiO 2 micromotors to overcome Brownian motion and achieve active motion by a self-diffusiophoretic mechanism even in the absence of light. Under UV light irradiation, Ag–TiO 2 micromotors move roughly twice as fast as Au–TiO 2 micromotors due to the synergy between the catalytic activity of Ag and self-electrophoresis. Finally, the Au–TiO 2 micromotors were applied to water purification, demonstrating the ability to break down polyethylene glycol (PEG) chains under UV light irradiation in both pure water and H 2 O 2 . These findings shed light on the interplay between electronic properties and catalytic activity in metal–semiconductor junctions, providing valuable insights for the design of more powerful and efficient light-driven micro- and nanomotors, with promising implications for water treatment and photocatalysis.
Results and Discussion Modeling Metal–TiO 2 Junctions Before presenting the results of the fabrication, characterization, and light-powered motion analysis of MXene-derived metal–TiO 2 Janus micromotors with different types of metal layers, it is essential to provide an introduction explaining the significance of diverse metal–semiconductor junctions in the field of self-propelled micro- and nanomotors. The light-driven self-propulsion of semiconductor-based micro- and nanomotors relies on irradiating a photocatalytic semiconductor micro- or nanoparticle with photons of higher energy than the semiconductor’s energy bandgap. The absorption of these photons promotes electrons from the semiconductor’s valence band to the conduction band, leaving holes in the valence band. Then, for the motor to move, photogenerated charge carriers must migrate to the semiconductor’s surface to react with water. However, the creation of photogenerated electron–hole pairs occurs rapidly, on the order of picoseconds, while their migration to the semiconductor’s surface occurs on a longer time scale, ranging from nanoseconds to microseconds. 36 Consequently, there is a high recombination probability for charge carriers, resulting in no movement of the semiconductor micro- or nanoparticles under light irradiation. To address this issue, the asymmetric deposition of a metal layer on the semiconductor’s surface can be employed, designing a metal–semiconductor Janus micro- or nanomotor. Then, the most common self-propulsion mechanism involves the reaction between photogenerated holes left in the semiconductor’s valence band, migrated to the surface, and photogenerated electrons transferred to the metal. These charges cause the oxidation and reduction of water according to the following reactions. 37 where TiO 2 was assumed as the photocatalytic semiconductor material. As a result, the semiconductor side of the micro- or nanomotor acts as a source of protons (H + ), while the metal side acts as a sink for H + . This behavior establishes an H + concentration gradient, which leads to a local electric field around the charged micro- or nanomotor, ultimately inducing its motion by self-electrophoresis. The work functions of the metal (Φ m ) and semiconductor (Φ s ), which represent the minimum energy required to liberate an electron from the materials, play a crucial role in determining the electronic properties of the junction. When Φ m < Φ s , an Ohmic contact is obtained, where charge carriers can flow from the metal to the semiconductor and vice versa with low resistance. 30 On the contrary, when Φ m > Φ s , a Schottky contact is obtained, characterized by a rectifying behavior due to the realization of a potential barrier at the interface with the metal governing charge carrier flow. As a consequence, in a Schottky contact, a built-in electric field is generated, which enhances the separation of photogenerated electron–hole pairs in the semiconductor, thereby avoiding the recombination process. Numerical simulations were performed by the Semiconductor Module of COMSOL Multiphysics software to model the energy band banding in semiconductor microparticles after the deposition of layers of different metals. Anatase TiO 2 , an n - type semiconductor where electrons are the majority carriers and holes are the minority carriers, was selected as the semiconducting material due to its frequent utilization in the fabrication of light-powered micro- and nanomotors. 
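For reference, the water half-reactions invoked above are presumably the standard photocatalytic scheme for a TiO 2 –metal Janus motor (written here in textbook form rather than quoted from ref 37): photogenerated holes oxidize water at the exposed TiO 2 surface,

2H 2 O + 4h + → O 2 + 4H +

while the electrons transferred to the metal cap reduce the protons,

4H + + 4e − → 2H 2

consistent with the H + source/sink picture described above.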
The chosen metals for this investigation were Au and Ag due to their different work functions (5.47 eV for Au, 4.64 eV for Ag) and their environmental compatibility. 38 Anatase TiO 2 relative dielectric permittivity (85), 39 work function (4.40 eV), 40 energy bandgap (3.20 eV), 41 and the mobility 42 and effective mass 43 of charge carriers were required for the numerical simulations. No effects related to surface defects or temperature dependence were considered in this model. Nanoscale metal–semiconductor junctions were constructed by placing a single Au or Ag nanoparticle (20 nm in diameter), for simplicity, on the surface of a TiO 2 microparticle, as illustrated in Figure S1 . Au–TiO 2 and Ag–TiO 2 contacts behave as Schottky junctions because of the higher work functions of the metals than TiO 2 . In particular, significant upward bending of TiO 2 conduction and valence bands forms. Consequently, a potential barrier arises qV B = q Φ m – q χ for electrons entering TiO 2 from the metal side, where χ is the semiconductor electron affinity and q is the charge. At the same time, the metal–TiO 2 interface is depleted of electrons and enriched in holes. Electrons in the TiO 2 conduction band experience a relatively small potential barrier of qV bi = q Φ m – q Φ s to transfer to the metal. The energy band diagrams and the simulated maps of the TiO 2 conduction band minimum (CBM) energy as a function of depth and distance from the metal nanoparticle’s center of Au–TiO 2 and Ag–TiO 2 Schottky junctions are compared in Figure 1 . The upward bending of the CBM almost approached 1.1 V beneath the Au nanoparticle center and extended 20–30 nm within TiO 2 , while for the Ag nanoparticle, a lower bending was computed (0.2 eV, approximately). This energy band bending generates an electric field under the metal nanoparticles, pointing toward the metals. Figure 1 also reports the simulated 2D maps of the built-in electric field at the TiO 2 surface below the metal nanoparticles. A 3-fold enhancement in the electric field intensity was calculated for the Au–TiO 2 Schottky junction (10 × 10 7 V m –1 ) compared to Ag–TiO 2 (3 × 10 7 V m –1 ). Since the electric field is proportional to the spatial derivative of the CBM energy, the highest intensity of the electric fields was found close to the metal nanoparticles’ edges, resulting in distinct “halo” shapes. Such strong and localized built-in electric fields favor the separation of photogenerated electron–hole pairs, avoiding unwanted recombination phenomena and efficiently utilizing absorbed light. It is reasonable to ask if the built-in electric field intensity significantly influences the motion of light-driven micro- and nanomotors. If so, careful band engineering of metal–semiconductor junctions promises to be the key to achieving powerful light-powered propulsive forces. Fabrication and Characterization of MXene-Derived Metal–TiO 2 Micromotors The interaction between metal and semiconductor in metal–semiconductor Schottky junctions gives rise to interesting phenomena that control the photogenerated charge carriers’ recombination and flow at the interface. The type and strength of these effects depend on the specific combination of the metal and semiconductor materials used. However, their impact on the light-powered motion of metal–semiconductor-based micro- and nanomotors, if any, has not been exhaustively elucidated yet. 
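These simulated values can be cross-checked directly from the work functions quoted above. Using qV bi = q (Φ m – Φ s ): for Au–TiO 2 , qV bi = 5.47 eV – 4.40 eV = 1.07 eV, matching the ∼1.1 eV upward bending of the CBM computed beneath the Au nanoparticle; for Ag–TiO 2 , qV bi = 4.64 eV – 4.40 eV = 0.24 eV, matching the ∼0.2 eV bending computed for Ag. The much larger built-in potential for Au is in line with the roughly 3-fold stronger interfacial electric field obtained in the simulations.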
In this study, the light-driven self-propulsion of micromotors based on a semiconductor in contact with different metals has been investigated. The semiconductor material chosen for this examination is a Ti 3 C 2 T x MXene-derived TiO 2 , combining the high photocatalytic activity of TiO 2 under UV light irradiation with the accordion-like multilayered structure of exfoliated MXenes, which is highly desirable for practical applications. As for the metals, Au and Ag were selected as the metal materials to realize metal–semiconductor interfaces such as those modeled in Figure 1 and for their biocompatibility compared to Pt, whose corrosion in water potentially causes harm to the environment and health. The different fabrication steps of MXene-derived metal–semiconductor micromotors are illustrated in Figure 2 a. Exfoliated Ti 3 C 2 T x microparticles were thermally annealed at 550 °C for 0 min, i.e., an annealing process where the temperature rump-up is immediately followed by the ramp-down, in synthetic air to induce the oxidation of the Ti 3 C 2 T x into TiO 2 . Despite the oxidation of Ti 3 C 2 T x to TiO 2 starting at a lower temperature, previous studies suggested that the thermal annealing process at 550 °C for 0 min results in the optimal morphological, structural, electrochemical, and photocatalytic properties of the obtained TiO 2 . 34 , 44 To enable light-powered motion, it was essential to break the symmetry of the semiconductor microparticle. To achieve this, MXene-derived TiO 2 microparticles were positioned in front of the Au and Ag targets of a sputter coater. This setup allowed for the asymmetric deposition of thin metal layers with a nominal thickness of about 80 nm on the MXene-derived TiO 2 microparticles’ surface. The scanning electron microscopy (SEM) image shown in Figure 2 b provides a closer look at the surface morphology of an MXene-derived TiO 2 microparticle. In comparison to the pristine MXene ( Figure S2 ), the previously smooth surface of Ti 3 C 2 T x multilayer stacks was transformed into aggregated TiO 2 nanoparticles. Despite this transformation, the characteristic multilayered structure of the Ti 3 C 2 T x MXene was maintained due to the rapid thermal annealing process. This preservation of the multilayered structure is especially promising for those applications where contact with the photocatalyst plays a crucial role, such as water purification. The successful conversion of Ti 3 C 2 T x into anatase TiO 2 was convincingly demonstrated through Raman spectroscopy. In Figure 2 c, the Raman spectra of both Ti 3 C 2 T x MXene and MXene-derived TiO 2 microparticles are compared. For Ti 3 C 2 T x MXene microparticles, three distinctive bands were observed in the Raman spectrum: the band at 152 cm –1 was attributed to in-plane Ti–C vibrations in Ti 3 C 2 with E g symmetry, the band at 404 cm –1 was ascribed to the in-plane vibrations of the O atoms in OH-terminated MXene (Ti 3 C 2 (OH) 2 ) with E g symmetry, and the band at 619 cm –1 corresponded to out-of-plane Ti–C vibrations in Ti 3 C 2 with A 1g symmetry, in agreement with a previous study. 45 On the contrary, the Raman spectrum of MXene-derived TiO 2 microparticles exhibited the characteristic bands of anatase TiO 2 , including E g symmetry band at 144 cm –1 , B 1g symmetry band at 394 cm –1 , A 1g symmetry band at 513 cm –1 , and E g symmetry band at 638 cm –1 . 
46 These results are coherent with the XRD pattern of the MXene-derived TiO 2 microparticles prepared by the same experimental procedure and reported in a previous manuscript, 34 which revealed the anatase crystalline phase of TiO 2 . It is important to note that anatase TiO 2 is known for the highest photocatalytic efficiency among the TiO 2 polymorphs, making this transformation highly significant for potential applications in photocatalysis. 47 The optical properties of MXene-derived TiO 2 microparticles were investigated using UV–visible spectroscopy. Particularly, the Tauc plot in Figure 2 d was derived from the absorbance measurement, and the energy bandgap of MXene-derived TiO 2 microparticles was determined to be 3.20 eV by extrapolating the linear part of the plot. This value is suitable for the absorption of UV light by the following micromotors. Notably, this value aligns perfectly with the expected optical bandgap for anatase TiO 2 , whose obtainment was also supported by the results of Raman analysis. Furthermore, the observed change in the color of the powders after annealing is consistent with these findings, as they transitioned from the black color of Ti 3 C 2 T x MXene to the white color of TiO 2 . The effective fabrication of metal–semiconductor Janus micromotors was demonstrated through energy-dispersive X-ray (EDX) spectroscopy to obtain elemental mapping images of MXene-derived TiO 2 microparticles after the metal deposition step ( Figure 2 e). These images clearly show the presence and spatial distribution of Ti, O, Au, and Ag elements. The uniform distribution of Ti and O elements over the MXene-derived TiO 2 microparticles confirmed the successful transformation of the MXene precursor into TiO 2 . Additionally, the images provide evidence of the presence of Au and Ag elements after the sputtering process. It is worth noting that these images do not show the characteristic Janus structure which has been extensively observed for spherical microparticles after the deposition of a metal layer by the sputtering technique. 20 , 25 , 48 This is due to the high intrinsic asymmetry, multilayered structure, and rough surface of the MXene-derived TiO 2 microparticles, which makes it challenging to visualize the boundary between the metal-coated side of the microparticles and the uncoated one. Nonetheless, in the EDX mapping image of the Au–TiO 2 micromotor, the Au element is more present on the edge of the micromotor, suggesting the achievement of an asymmetric structure that can allow its directional propulsion under UV light irradiation. Motion Behavior of MXene-Derived Metal–TiO 2 Micromotors The motion behavior of MXene-derived Au–TiO 2 and Ag–TiO 2 micromotors was investigated in pure water to disclose the potential influence of the higher built-in electric field in the Au–TiO 2 Schottky junction compared to the Ag–TiO 2 one, originating from the different work functions of the metals. Initially, a control experiment verified that MXene-derived TiO 2 microparticles show only Brownian motion under UV light irradiation in pure water, as indicated by the representative trajectories in Figure S3 . This observation is in agreement with previous studies demonstrating that the self-propulsion of metal-free single-component photocatalytic semiconductor micro- and nanoparticles has been obtained for intrinsically asymmetric structures or upon exposure to directional illumination, eventually in the presence of additional H 2 O 2 fuel. 
49 The asymmetric deposition of a metal layer was expected to turn MXene-derived TiO 2 microparticles into efficient, light-powered micromotors. In this regard, the first experiment aimed at evaluating micromotors’ response to repeated on–off switches of the UV light source at time intervals of approximately 10 s in pure water ( Movie S1 ). Figure 3 a reports time-lapse micrographs showing the trajectories of an Au–TiO 2 micromotor and an Ag–TiO 2 micromotor. Both micromotors exhibited no self-propulsion in the dark in pure water. However, upon turning on UV light irradiation, micromotors manifested the self-propulsion ability, which resulted in a net displacement. Once the dark condition was restored, the micromotors’ movement rapidly stopped. Therefore, in pure water, micromotors could rapidly change their motion status following the presence or absence of UV light. It is worth noting that after the UV light source was turned off, the micromotors occasionally displayed a significant displacement due to recoil phenomena rather than Brownian motion only, as observed after 20 s for the Au–TiO 2 micromotor in Figure 3 a. Nonetheless, the distinct behavior under UV light irradiation was confirmed by the remarkable rise of the instantaneous velocity of the micromotors as a function of time. In the dark, micromotors presented a similar instantaneous velocity of 0–1 μm s –1 , which increased to 1–3 μm s –1 under UV light irradiation. To get more insights into the nature of the micromotors’ motion behavior in dark and light conditions in pure water, movies of several micromotors were recorded and tracked to obtain their trajectories and calculate the mean squared displacement (MSD), denoted as ⟨Δ L 2 ⟩ [μm 2 ]. The magnitude of ⟨Δ L 2 ⟩ reflects the strength of the propulsive force, while its variation over time offers insight into the type of motion. For an ensemble of particles at the time interval Δ t [s], ⟨Δ L 2 ⟩ is defined as: where x (Δ t ) and y (Δ t ) [μm] are the coordinates of the i th particle at the time interval Δ t , x 0 , and y 0 are the initial coordinates of the i th particle, and the brackets “⟨⟩” indicate the average over numerous particles. 50 For a spherical particle on a plane experiencing Brownian motion, i.e., random fluctuations of the particle’s position due to the diffusion process, ⟨Δ L 2 ⟩ is linear with Δ t : where D [μm 2 s –1 ] is the diffusion coefficient. In some cases, ⟨Δ L 2 ⟩ varies as Δ t α , with α > 1, and the particles’ motion is referred to as “superdiffusive.” For particles in the ballistic motion regime, α = 2, and ⟨Δ L 2 ⟩ obeys the following relationship: where v [μm s –1 ] is the velocity. This theoretical framework was used to model the MSD data of micromotors (the results of MSD data fitting are reported in Table S1 ). For both types of micromotors, MSD analysis revealed the linearity between ⟨Δ L 2 ⟩ and Δ t in the dark according to eq 3 , which suggested that micromotors displayed Brownian motion with random displacements in the absence of UV light in pure water. Instead, ⟨Δ L 2 ⟩ followed a quadratic relationship with Δ t , as stated by eq 4 , under UV light irradiation, demonstrating the self-propulsion of micromotors with directional motion in pure water. This finding is in agreement with the trajectories in the time-lapse micrographs in F igure 3 a. Noteworthy, ⟨Δ L 2 ⟩ values of Au–TiO 2 were higher than Ag–TiO 2 . 
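For reference, the expressions used in this MSD analysis, written in the standard form for 2D particle tracking (presumably the content of the ⟨Δ L 2 ⟩ definition and of eqs 3 and 4 referred to above), are

⟨Δ L 2 (Δ t )⟩ = ⟨[ x i (Δ t ) − x i ,0 ] 2 + [ y i (Δ t ) − y i ,0 ] 2 ⟩

⟨Δ L 2 ⟩ = 4 D Δ t   (3)

⟨Δ L 2 ⟩ = 4 D Δ t + v 2 Δ t 2   (4)

A minimal fitting sketch is given below; the variable names and the use of SciPy are illustrative assumptions, not the authors' tracking code:

import numpy as np
from scipy.optimize import curve_fit

def msd_from_track(x, y, max_lag):
    # Time-averaged MSD of one 2D trajectory for lag times 1..max_lag frames.
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        dx = x[lag:] - x[:-lag]
        dy = y[lag:] - y[:-lag]
        msd[lag - 1] = np.mean(dx**2 + dy**2)
    return msd

def ballistic(dt, D, v):
    # Eq 4: diffusive term plus directional (ballistic) term.
    return 4.0 * D * dt + (v * dt) ** 2

# With dt_axis in s and msd_mean (averaged over micromotors) in um^2:
# (D, v), _ = curve_fit(ballistic, dt_axis, msd_mean, p0=(0.03, 2.0))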
Consequently, by averaging on multiple micromotors, it was found that Au–TiO 2 micromotors’ light-driven motion in pure water was more powerful than that of Ag–TiO 2 micromotors. Moreover, compared to the previously published TiO 2 @Ti 3 C 2 /Pt micromotors, Au–TiO 2 micromotors reached a higher ⟨Δ L 2 ⟩ after 1 s (7 vs 1.5 μm 2 , approximately) under similar experimental conditions. 33 MSD data fitting allowed us to determine the diffusion coefficients and velocities of micromotors in the absence and presence of UV light irradiation in pure water. In the dark, micromotors had comparable diffusion coefficients (0.027–0.029 μm 2 s –1 ). Under UV light irradiation, a 3-fold increase in the diffusion coefficients was found (0.08–0.1 μm 2 s –1 ). Therefore, diffusion coefficients were similar independently to the type of metal material. In contrast, under UV light irradiation, a higher velocity was obtained for Au–TiO 2 micromotors than Ag–TiO 2 (2.6 vs 2.1 μm s –1 ), which is explained by the stronger built-in electric field at the Au–TiO 2 interface. Previous reports, which focused on comparing the velocity of light-driven metal–semiconductor Janus micromotors prepared with different metals, utilized electrochemical measurements to validate velocity results. In this context, metal–semiconductor micromotors were modeled as two electrodes, one for the metal material and the other one for the semiconductor material, with distinct electrochemical potentials. 26 , 28 Then, the researchers argued that the larger the potential difference between the two electrodes, the larger the resulting micromotors’ velocity. For example, Maric et al. prepared metal–TiO 2 micromotors using Pt, Cu, Fe, Ag, and Au. 25 The electrochemical potential analysis allowed them to justify the higher velocity of the Pt–TiO 2 micromotors. Nevertheless, for Fe–TiO 2 and Cu–TiO 2 micromotors, it predicted a lower velocity than Ag–TiO 2 , in contrast with the motion experiments results. This discrepancy is explained by the fact that even though the electrochemical potential difference generally provides valuable information about the velocity of micromotors, it may not consider condensed matter physics phenomena occurring upon the contact between the metal and the semiconductor materials, such as the establishment of a Schottky junction, which affects the charge carrier transfer process at the metal–semiconductor interface. In fact, Fe and Cu have generally larger work functions (4.81 and 4.94 eV) than Ag (4.74 eV), similar to the case of Au. 38 The built-in electric field at metal–TiO 2 Schottky junctions potentially has a double-edged sword effect: on the one hand, it promotes the separation of photogenerated carriers and the accumulation of holes at the interface; on the other hand, the higher potential barrier is detrimental to the electron transfer from the semiconductor to the metal. The investigation of the motion behavior of MXene-derived Au–TiO 2 and Ag–TiO 2 micromotors concludes that the stronger built-in electric field at the Au–TiO 2 interface positively impacts micromotors’ light-driven self-propulsion ability. This result suggests that the higher density of holes beneath the Au layer enhances the reaction rate of the oxidation of water to H + , generating a larger concentration gradient of H + and, thus, a more intense local-electric field responsible for micromotors’ self-electrophoresis, as illustrated in Figure 3 d. 
Thereby, in metal–TiO 2 micromotors, such a positive effect surpasses the negative effect associated with the higher potential barrier for electron transfer. It is worth noting that the aim of this study was not to achieve velocities higher than those reported in the literature. Nonetheless, Au–TiO 2 micromotors achieved a higher or comparable velocity than γ-Fe 2 O 3 –Bi–V 2 C micromotors and many other fuel-free metal–semiconductor Janus micromotors tested in similar conditions. 25 , 26 , 28 , 35 The metal–TiO 2 micromotors in this study were prepared following the same fabrication procedure of the previously reported MXene-derived γ-Fe 2 O 3 /Pt/TiO 2 microrobots. 34 The only difference is the presence of a Pt layer rather than Au or Ag layers and the inclusion of magnetic nanoparticles to provide the microrobots with magnetic properties. Pt is known to be a better catalyst than Au and Ag for H 2 production from water. Therefore, it is not surprising that the Au–TiO 2 micromotors have a lower velocity than γ-Fe 2 O 3 /Pt/TiO 2 microrobots under UV light irradiation in pure water (2.6 vs 16 μm s –1 ). Still, it is worth noting that Au–TiO 2 micromotors were powered using a 30 times lower intensity of the UV light source than γ-Fe 2 O 3 /Pt/TiO 2 microrobots (∼50 vs ∼1500 mW cm –2 ). Besides, previous studies suggest that the velocity of Au–TiO 2 micromotors can be further increased by improving the compactness of the metal layer, for example by prolonging the sputtering deposition or using nonlayered semiconducting microparticles as the main building block. 51 , 52 To assess the applicability of micromotors in real scenarios, such as in wastewater purification, the UV light-driven self-propulsion of Au–TiO 2 and Ag–TiO 2 micromotors was investigated in raw wastewater, i.e., before entering the wastewater treatment plant and being subjected to any purification process. Figure S4 reports the photograph and micrograph of the wastewater sample, which reveal the massive presence of solid impurities. In this complex environment, the micromotors did not manifest the ability to autonomously move under UV light irradiation, being stuck on the microscope glass slide or obstructed by the surrounding microparticles. Nonetheless, wastewater can be first treated to remove and release the contaminants in a second vessel, where the micromotors can induce their photocatalytic degradation under UV light irradiation. This approach allows for the confinement of potential secondary pollution. Alternatively, the micromotors can be loaded with magnetic nanoparticles, powered by an external magnetic field in wastewater and, simultaneously, activated by UV light irradiation to catch and degrade the pollutants. 7 Since UV light is not biocompatible, the self-propulsion ability of Au–TiO 2 and Ag–TiO 2 micromotors was also investigated under visible light irradiation in pure water. However, the micromotors displayed Brownian motion only ( Movie S4 ). This result is in agreement with the measured energy bandgap of the MXene-derived TiO 2 microparticles (3.20 eV), which indicates that the micromotors can be activated by UV light only. While this discussion may seem comprehensive, it must be noted that many light-powered micro- and nanomotors cannot move in pure water and require additional fuels to manifest their self-propulsion ability. Among the fuels, H 2 O 2 is the most commonly reported, regardless of its potential toxicity at high concentrations. 
H 2 O 2 contributes to the micro- and nanomotors’ motion with the following reactions involving photogenerated charge carriers. 37 Therefore, to deepen the comparison and understanding of the performance of different metal–semiconductor junctions and related electronic properties, the motion behavior of MXene-derived Au–TiO 2 and Ag–TiO 2 micromotors was also examined at the low concentration of 0.1 wt % H 2 O 2 . Once again, the first experiment explored the response of micromotors in dark and light conditions, influenced by the presence of the fuel ( Movie S2 ). The time-lapse micrographs in Figure 4 a indicate no significant difference for the Au–TiO 2 micromotors in 0.1 wt % H 2 O 2 compared to pure water: random fluctuations of the micromotor’s position were observed in the dark, and a directional motion was noted under UV light irradiation. Conversely, the Ag–TiO 2 micromotor presented a completely different scenario, characterized by directional motion in both the presence and absence of UV light irradiation. This behavior was detected for successive on–off switching of the UV light source, during which the micromotor preserved its mobile status. The lack of control over the on–off status of the Ag–TiO 2 micromotor compared to that of the Au–TiO 2 micromotor was reflected in the temporal variation of its instantaneous velocity. For the Au–TiO 2 micromotor, the on–off switching of the UV light source was followed by a rapid rise–decrease of the velocity (from 0–1 μm s –1 to 2–3 μm s –1 ). On the other hand, the Ag–TiO 2 micromotor displayed a high and constant velocity in the dark (3–5 μm s –1 ), which further increased under UV light irradiation (5–7 μm s –1 ). Hence, the reaction between Ag and H 2 O 2 rendered the micromotor active even in the absence of light, and its activity was amplified upon exposure to UV light irradiation. Of note, the trajectories of Au–TiO 2 micromotors and Ag–TiO 2 micromotors in both Figures 3 a and 4 a are clockwise and anticlockwise, respectively. Even so, it was not possible to control the direction of the motion of micromotors, forcing their movement along a clockwise or anticlockwise rotation. MSD analysis was employed to unambiguously determine the type of motion of micromotors in 0.1 wt % H 2 O 2 . Fitted MSD data are shown in Figure 4 b (the results of MSD data fitting are reported in Table S1 ). First, a linear fit of MSD data of Au–TiO 2 micromotors in the dark using eq 3 was attempted. The linearity between ⟨Δ L 2 ⟩ and Δ t was hinted by the absence of a net displacement in the time-lapse images of the Au–TiO 2 micromotor in the dark in 0.1 wt % H 2 O 2 . Nevertheless, the inconsistency of the fitting results suggested a superdiffusive motion behavior rather than Brownian motion. Then, by assuming a ballistic motion and a quadratic relationship between ⟨Δ L 2 ⟩ and Δ t as in eq 4 , a satisfactory fitting was attained. This outcome revealed the underlying reaction between Au and H 2 O 2 , whose contribution was not powerful enough to overcome Brownian motion. Under UV light irradiation, ⟨Δ L 2 ⟩ of Au–TiO 2 micromotors followed a parabola, as expected from the observed directional motion in 0.1 wt % H 2 O 2 in the time-lapse micrographs in Figure 4 a. Regarding the Ag–TiO 2 micromotors in 0.1 wt % H 2 O 2 , the hypothesis of ballistic motion was confirmed by fitting MSD data with eq 4 . 
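For completeness, the H 2 O 2 reactions invoked at the beginning of this passage are presumably the standard ones (written in textbook form rather than quoted from ref 37): under illumination, photogenerated holes and electrons oxidize and reduce the peroxide,

H 2 O 2 + 2h + → O 2 + 2H +

H 2 O 2 + 2H + + 2e − → 2H 2 O

while in the dark the metal caps simply catalyze its disproportionation (the reaction referred to further below),

2H 2 O 2 → 2H 2 O + O 2

which generates the product concentration gradient that drives self-diffusiophoresis.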
Notably, MSD data of Ag–TiO 2 micromotors in the dark were already higher than UV light irradiated Au–TiO 2 micromotors, before further increasing for UV light-irradiated Ag–TiO 2 micromotors. This observation highlighted a large difference between the metals originating from the presence of the H 2 O 2 fuel. The diffusion coefficients and velocities of micromotors in dark and light conditions in the presence of 0.1 wt % H 2 O 2 were obtained and are compared in Figure 4 c. Under UV light irradiation, the diffusion coefficients increased for both types of micromotors, with Ag–TiO 2 micromotors showing the highest diffusion coefficient (0.6 μm 2 s –1 ). Ag–TiO 2 micromotors exhibited a high diffusion coefficient also in the dark (0.33 μm 2 s –1 ), which was significantly larger than Au–TiO 2 micromotors under the same condition (0.045 μm 2 s –1 ) and comparable to Au–TiO 2 micromotors under UV light irradiation (0.4 μm 2 s –1 ). This trend was discovered also for velocity values (0.6 and 2.8 μm s –1 for Au–TiO 2 micromotors in dark and light conditions, 3.3 and 5.5 μm s –1 for Ag–TiO 2 micromotors in dark and light conditions). For both micromotors, the velocity under UV light irradiation in 0.1 wt % H 2 O 2 improved compared to pure water. In this regard, particularly relevant is the enhancement of the velocity of Ag–TiO 2 micromotors. The most remarkable finding of motion experiments in 0.1 wt % H 2 O 2 is that the presence of the fuel reverts the paradigm of the higher built-in electric field at the metal–semiconductor interface. Indeed, it was revealed that the catalytic properties of the metal may exceed the constrictions deriving from the energy band bending of the metal–semiconductor Schottky junction, as it occurred for Ag–TiO 2 micromotors in the presence of 0.1 wt % H 2 O 2 . On these bases, the behavior of the two types of micromotors was described according to the scheme illustrated in Figure 4 d. In the dark, both Au and Ag metal layers decomposed the H 2 O 2 fuel based on the following reaction. However, the superior catalytic properties of Ag compared to Au concerning H 2 O 2 decomposition led to the generation of a larger product concentration gradient, which allowed overcoming Brownian motion and achieving the self-propulsion via the self-diffusiophoretic mechanism. As a consequence, under UV light irradiation, Au–TiO 2 micromotors marginally benefited from the presence of H 2 O 2 and moved with a velocity slightly higher than that of pure water. On the contrary, the synergy between Ag catalytic activity and self-electrophoresis let Ag–TiO 2 micromotors reach the highest velocity despite the lower built-in electric field of the Ag–TiO 2 contact. Even though Ag–TiO 2 micromotors exhibited a more powerful self-propulsion, it is generally reported that the Ag layer easily dissolves during the catalytic reaction with an H 2 O 2 solution. Therefore, for practical applications, it is crucial to evaluate the potential release of Ag + ions in water. For this purpose, the Ag–TiO 2 micromotors were immersed in 0.1 wt % H 2 O 2 under UV light irradiation for 2 h. At the end of the experiment, the solution was analyzed by inductively coupled plasma-mass spectrometry (ICP-MS), which revealed a concentration of Ag + ions of 0.18 mg L –1 . Although this value is slightly above the secondary maximum contaminant limit (SMCL) of 0.1 mg L –1 set by the United States Environmental Protection Agency (U.S. 
EPA) and the World Health Organization (WHO), 53 it can be decreased by reducing the concentration of the micromotors. On the other hand, the ability of the Ag–TiO 2 micromotors to release a large number of Ag + ions during their self-propulsion in an H 2 O 2 solution can be beneficial for specific applications, such as the elimination of bacteria and the eradication of bacterial biofilms, due to Ag + ions antibacterial properties. 54 , 55 Polymer Degradation Application In a previous study, similar MXene-derived γ-Fe 2 O 3 /Pt/TiO 2 microrobots were applied to preconcentrate and detect nanoplastics in water via tunable electrostatic interactions and electrochemical measurements using miniaturized electrodes. 34 Conversely, in this study, MXene-derived metal–TiO 2 micromotors were applied for the degradation of synthetic PEG chains with a molecular weight of ∼600 g mol –1 . This polymer is widely used in cosmetics and pharmaceutical formulations. It was selected as a model for persistent water pollutants since its synthetic nature and the covalent bonds linking its organic subunits (monomers) make degradation challenging. 56 , 57 In addition, PEG degradation process can be accurately monitored by mass spectrometry techniques revealing the presence of oligomers even in nanomole concentration. 58 PEG degradation experiments were initially performed under UV light irradiation in pure water. Consequently, Au–TiO 2 micromotors were selected as the optimal micromotors for these experiments due to their higher velocity under this condition. PEG degradation was evaluated by electrospray ionization mass spectrometry (ESI-MS). Figure 5 a compares the spectra of PEG in pure water before any treatment, PEG treated with UV light irradiation in pure water for 8 h, and PEG treated with Au–TiO 2 micromotors under UV light irradiation in pure water for 8 h. The mass distribution of untreated PEG was centered around 600 m / z , as expected. After the treatment with UV light irradiation, the measured mass distribution slightly shifted to lower m / z values and new signals appeared in the m / z region 100–300, indicating that the prolonged exposure to UV light initiated the degradation of the polymer. After the treatment with Au–TiO 2 micromotors under UV light irradiation, the mass distribution further shifted to lower m / z values, while a second and pronounced mass distribution, centered around 300 m / z , suggested a higher PEG oxidation. This is due to the photocatalytic activity of Au–TiO 2 micromotors, which produced reactive oxygen species (ROS) during their self-propulsion, breaking and oxidizing the PEG chains as revealed from mass peak assessment. In pure water, the polymer was not thoroughly degraded. Therefore, H 2 O 2 was involved in the treatments to improve the degradation efficiency of PEG. Of note, H 2 O 2 toxicity, even at concentrations as low as 0.1 wt %, limits its applicability, especially in the biomedical field. However, in water purification applications, H 2 O 2 is often used in combination with (photo)catalysts and light irradiation, allowing Fenton and photo-Fenton reactions that enhance the production of ROS and accelerate the degradation process. On these bases, the experiments were repeated in the presence of 0.1 wt % H 2 O 2 and the obtained spectra are reported in Figure 5 b. A remarkable degradation of PEG was obtained using H 2 O 2 and UV light irradiation due to the UV light breaking the H 2 O 2 molecule to form hydroxyl radicals (OH · ). 
In the presence of the Au–TiO 2 micromotors, the mass spectrum presented almost no signal for m / z above 400, and a narrow mass distribution around m / z 200 suggested that the long PEG chains were broken into pieces with lower molecular weights. In principle, the degradation efficiency can be further improved by increasing the amount of photocatalysts, H 2 O 2 concentration, UV light irradiation intensity, or treatment duration. In this regard, by prolonging the treatment with Au–TiO 2 micromotors, 0.1 wt % H 2 O 2, and UV light irradiation from 8 to 16 h, a superior PEG degradation was obtained ( Figure S5 ). Control experiments were also performed to compare the MXene-derived Au–TiO 2 micromotors with Au–TiO 2 microparticles prepared by the sputtering deposition of an Au layer on purchased TiO 2 microparticles. In particular, PEG was treated with MXene-derived or commercial Au–TiO 2 microparticles under UV light irradiation in pure water and 0.1 wt % H 2 O 2 for 8 h. The acquired mass spectra are compared in Figure S6 . In pure water, the performances of the MXene-derived Au–TiO 2 micromotors and commercial Au–TiO 2 micromotors were comparable, i.e., the distribution of the degradation products was similar for the two cases. Instead, in the presence of H 2 O 2 , the MXene-derived Au–TiO 2 micromotors showed a significant improvement compared with their cheaper counterpart. This improvement is evident since the mass spectra of the latter still presented distinct signals in the m / z region 400–700 and may be attributed to the larger exposed surface of multilayered TiO 2 microparticles. Despite demonstrating slightly inferior performance in PEG degradation compared to MXene-derived TiO 2 , commercial TiO 2 proved more advantageous overall due to its lower cost. Indeed, the preparation of MXene-derived TiO 2 involves more expensive precursors and additional preparation steps.
Results and Discussion Modeling Metal–TiO 2 Junctions Before presenting the results of the fabrication, characterization, and light-powered motion analysis of MXene-derived metal–TiO 2 Janus micromotors with different types of metal layers, it is essential to provide an introduction explaining the significance of diverse metal–semiconductor junctions in the field of self-propelled micro- and nanomotors. The light-driven self-propulsion of semiconductor-based micro- and nanomotors relies on irradiating a photocatalytic semiconductor micro- or nanoparticle with photons of higher energy than the semiconductor’s energy bandgap. The absorption of these photons promotes electrons from the semiconductor’s valence band to the conduction band, leaving holes in the valence band. Then, for the motor to move, photogenerated charge carriers must migrate to the semiconductor’s surface to react with water. However, the creation of photogenerated electron–hole pairs occurs rapidly, on the order of picoseconds, while their migration to the semiconductor’s surface occurs on a longer time scale, ranging from nanoseconds to microseconds. 36 Consequently, there is a high recombination probability for charge carriers, resulting in no movement of the semiconductor micro- or nanoparticles under light irradiation. To address this issue, the asymmetric deposition of a metal layer on the semiconductor’s surface can be employed, designing a metal–semiconductor Janus micro- or nanomotor. Then, the most common self-propulsion mechanism involves the reaction between photogenerated holes left in the semiconductor’s valence band, migrated to the surface, and photogenerated electrons transferred to the metal. These charges cause the oxidation and reduction of water according to the following reactions. 37 where TiO 2 was assumed as the photocatalytic semiconductor material. As a result, the semiconductor side of the micro- or nanomotor acts as a source of protons (H + ), while the metal side acts as a sink for H + . This behavior establishes an H + concentration gradient, which leads to a local electric field around the charged micro- or nanomotor, ultimately inducing its motion by self-electrophoresis. The work functions of the metal (Φ m ) and semiconductor (Φ s ), which represent the minimum energy required to liberate an electron from the materials, play a crucial role in determining the electronic properties of the junction. When Φ m < Φ s , an Ohmic contact is obtained, where charge carriers can flow from the metal to the semiconductor and vice versa with low resistance. 30 On the contrary, when Φ m > Φ s , a Schottky contact is obtained, characterized by a rectifying behavior due to the realization of a potential barrier at the interface with the metal governing charge carrier flow. As a consequence, in a Schottky contact, a built-in electric field is generated, which enhances the separation of photogenerated electron–hole pairs in the semiconductor, thereby avoiding the recombination process. Numerical simulations were performed by the Semiconductor Module of COMSOL Multiphysics software to model the energy band banding in semiconductor microparticles after the deposition of layers of different metals. Anatase TiO 2 , an n - type semiconductor where electrons are the majority carriers and holes are the minority carriers, was selected as the semiconducting material due to its frequent utilization in the fabrication of light-powered micro- and nanomotors. 
The chosen metals for this investigation were Au and Ag due to their different work functions (5.47 eV for Au, 4.64 eV for Ag) and their environmental compatibility. 38 The relative dielectric permittivity (85), 39 work function (4.40 eV), 40 and energy bandgap (3.20 eV) 41 of anatase TiO 2 , together with the mobility 42 and effective mass 43 of its charge carriers, were used as input parameters for the numerical simulations. No effects related to surface defects or temperature dependence were considered in this model. Nanoscale metal–semiconductor junctions were constructed by placing a single Au or Ag nanoparticle (20 nm in diameter), for simplicity, on the surface of a TiO 2 microparticle, as illustrated in Figure S1 . Au–TiO 2 and Ag–TiO 2 contacts behave as Schottky junctions because of the higher work functions of the metals than TiO 2 . In particular, a significant upward bending of the TiO 2 conduction and valence bands forms. Consequently, a potential barrier qV B = q Φ m – q χ arises for electrons entering TiO 2 from the metal side, where χ is the semiconductor electron affinity and q is the elementary charge. At the same time, the metal–TiO 2 interface is depleted of electrons and enriched in holes. Electrons in the TiO 2 conduction band experience a relatively small potential barrier of qV bi = q Φ m – q Φ s to transfer to the metal. The energy band diagrams and the simulated maps of the TiO 2 conduction band minimum (CBM) energy as a function of depth and distance from the metal nanoparticle’s center of Au–TiO 2 and Ag–TiO 2 Schottky junctions are compared in Figure 1 . The upward bending of the CBM approached almost 1.1 eV beneath the Au nanoparticle center and extended 20–30 nm within TiO 2 , while for the Ag nanoparticle, a lower bending was computed (0.2 eV, approximately). This energy band bending generates an electric field under the metal nanoparticles, pointing toward the metals. Figure 1 also reports the simulated 2D maps of the built-in electric field at the TiO 2 surface below the metal nanoparticles. An approximately 3-fold enhancement in the electric field intensity was calculated for the Au–TiO 2 Schottky junction (10 × 10 7 V m –1 ) compared to Ag–TiO 2 (3 × 10 7 V m –1 ). Since the electric field is proportional to the spatial derivative of the CBM energy, the highest intensity of the electric fields was found close to the metal nanoparticles’ edges, resulting in distinct “halo” shapes. Such strong and localized built-in electric fields favor the separation of photogenerated electron–hole pairs, avoiding unwanted recombination phenomena and efficiently utilizing absorbed light. It is reasonable to ask whether the built-in electric field intensity significantly influences the motion of light-driven micro- and nanomotors. If so, careful band engineering of metal–semiconductor junctions promises to be the key to achieving powerful light-powered propulsive forces. Fabrication and Characterization of MXene-Derived Metal–TiO 2 Micromotors The interaction between metal and semiconductor in metal–semiconductor Schottky junctions gives rise to interesting phenomena that control the photogenerated charge carriers’ recombination and flow at the interface. The type and strength of these effects depend on the specific combination of the metal and semiconductor materials used. However, their impact on the light-powered motion of metal–semiconductor-based micro- and nanomotors, if any, has not been exhaustively elucidated yet. 
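As a quick consistency check on the band bending values quoted above, the ideal built-in potential of a metal–semiconductor junction can be estimated from the work-function difference alone. The sketch below uses only the work functions given in the text; it ignores interface states, image-force lowering and the nanoparticle geometry treated in the COMSOL model, so it is an order-of-magnitude check rather than a substitute for the simulations.

```python
# Ideal built-in potential of metal-TiO2 Schottky junctions estimated from the
# work functions quoted in the text (Schottky-Mott rule; all values in eV).
PHI_METAL = {"Au": 5.47, "Ag": 4.64}   # metal work functions, eV
PHI_TIO2 = 4.40                        # anatase TiO2 work function, eV

for metal, phi_m in PHI_METAL.items():
    v_bi = phi_m - PHI_TIO2            # qV_bi = q(phi_m - phi_s)
    print(f"{metal}-TiO2: qV_bi = {v_bi:.2f} eV")

# Prints roughly 1.07 eV for Au-TiO2 and 0.24 eV for Ag-TiO2, consistent with the
# ~1.1 eV and ~0.2 eV band bending obtained from the numerical simulations.
```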
In this study, the light-driven self-propulsion of micromotors based on a semiconductor in contact with different metals has been investigated. The semiconductor material chosen for this examination is a Ti 3 C 2 T x MXene-derived TiO 2 , combining the high photocatalytic activity of TiO 2 under UV light irradiation with the accordion-like multilayered structure of exfoliated MXenes, which is highly desirable for practical applications. As for the metals, Au and Ag were selected to realize metal–semiconductor interfaces such as those modeled in Figure 1 , and for their better biocompatibility compared to Pt, whose corrosion in water potentially harms the environment and health. The different fabrication steps of MXene-derived metal–semiconductor micromotors are illustrated in Figure 2 a. Exfoliated Ti 3 C 2 T x microparticles were thermally annealed at 550 °C for 0 min, i.e., an annealing process where the temperature ramp-up is immediately followed by the ramp-down, in synthetic air to induce the oxidation of the Ti 3 C 2 T x into TiO 2 . Although the oxidation of Ti 3 C 2 T x to TiO 2 starts at a lower temperature, previous studies suggested that the thermal annealing process at 550 °C for 0 min results in the optimal morphological, structural, electrochemical, and photocatalytic properties of the obtained TiO 2 . 34 , 44 To enable light-powered motion, it was essential to break the symmetry of the semiconductor microparticle. To achieve this, MXene-derived TiO 2 microparticles were positioned in front of the Au and Ag targets of a sputter coater. This setup allowed for the asymmetric deposition of thin metal layers with a nominal thickness of about 80 nm on the MXene-derived TiO 2 microparticles’ surface. The scanning electron microscopy (SEM) image shown in Figure 2 b provides a closer look at the surface morphology of an MXene-derived TiO 2 microparticle. In comparison to the pristine MXene ( Figure S2 ), the previously smooth surface of Ti 3 C 2 T x multilayer stacks was transformed into aggregated TiO 2 nanoparticles. Despite this transformation, the characteristic multilayered structure of the Ti 3 C 2 T x MXene was maintained due to the rapid thermal annealing process. This preservation of the multilayered structure is especially promising for those applications where contact with the photocatalyst plays a crucial role, such as water purification. The successful conversion of Ti 3 C 2 T x into anatase TiO 2 was convincingly demonstrated through Raman spectroscopy. In Figure 2 c, the Raman spectra of both Ti 3 C 2 T x MXene and MXene-derived TiO 2 microparticles are compared. For Ti 3 C 2 T x MXene microparticles, three distinctive bands were observed in the Raman spectrum: the band at 152 cm –1 was attributed to in-plane Ti–C vibrations in Ti 3 C 2 with E g symmetry, the band at 404 cm –1 was ascribed to the in-plane vibrations of the O atoms in OH-terminated MXene (Ti 3 C 2 (OH) 2 ) with E g symmetry, and the band at 619 cm –1 corresponded to out-of-plane Ti–C vibrations in Ti 3 C 2 with A 1g symmetry, in agreement with a previous study. 45 On the contrary, the Raman spectrum of MXene-derived TiO 2 microparticles exhibited the characteristic bands of anatase TiO 2 , including the E g symmetry band at 144 cm –1 , the B 1g symmetry band at 394 cm –1 , the A 1g symmetry band at 513 cm –1 , and the E g symmetry band at 638 cm –1 . 
46 These results are consistent with the XRD pattern of the MXene-derived TiO 2 microparticles prepared by the same experimental procedure and reported in a previous manuscript, 34 which revealed the anatase crystalline phase of TiO 2 . It is important to note that anatase TiO 2 is known to have the highest photocatalytic efficiency among the TiO 2 polymorphs, making this transformation highly significant for potential applications in photocatalysis. 47 The optical properties of MXene-derived TiO 2 microparticles were investigated using UV–visible spectroscopy. Particularly, the Tauc plot in Figure 2 d was derived from the absorbance measurement, and the energy bandgap of MXene-derived TiO 2 microparticles was determined to be 3.20 eV by extrapolating the linear part of the plot. This value is suitable for the absorption of UV light by the resulting micromotors. Notably, this value aligns perfectly with the expected optical bandgap for anatase TiO 2 , whose formation was also supported by the results of the Raman analysis. Furthermore, the observed change in the color of the powders after annealing is consistent with these findings, as they transitioned from the black color of Ti 3 C 2 T x MXene to the white color of TiO 2 . The effective fabrication of metal–semiconductor Janus micromotors was demonstrated through energy-dispersive X-ray (EDX) spectroscopy, which was used to obtain elemental mapping images of MXene-derived TiO 2 microparticles after the metal deposition step ( Figure 2 e). These images clearly show the presence and spatial distribution of Ti, O, Au, and Ag elements. The uniform distribution of Ti and O elements over the MXene-derived TiO 2 microparticles confirmed the successful transformation of the MXene precursor into TiO 2 . Additionally, the images provide evidence of the presence of Au and Ag elements after the sputtering process. It is worth noting that these images do not show the characteristic Janus structure which has been extensively observed for spherical microparticles after the deposition of a metal layer by the sputtering technique. 20 , 25 , 48 This is due to the high intrinsic asymmetry, multilayered structure, and rough surface of the MXene-derived TiO 2 microparticles, which make it challenging to visualize the boundary between the metal-coated side of the microparticles and the uncoated one. Nonetheless, in the EDX mapping image of the Au–TiO 2 micromotor, the Au signal is concentrated mainly at the edge of the micromotor, suggesting the achievement of an asymmetric structure that can allow its directional propulsion under UV light irradiation. Motion Behavior of MXene-Derived Metal–TiO 2 Micromotors The motion behavior of MXene-derived Au–TiO 2 and Ag–TiO 2 micromotors was investigated in pure water to disclose the potential influence of the higher built-in electric field in the Au–TiO 2 Schottky junction compared to the Ag–TiO 2 one, originating from the different work functions of the metals. Initially, a control experiment verified that MXene-derived TiO 2 microparticles show only Brownian motion under UV light irradiation in pure water, as indicated by the representative trajectories in Figure S3 . This observation is in agreement with previous studies demonstrating that the self-propulsion of metal-free single-component photocatalytic semiconductor micro- and nanoparticles has been obtained for intrinsically asymmetric structures or upon exposure to directional illumination, in some cases in the presence of additional H 2 O 2 fuel. 
49 The asymmetric deposition of a metal layer was expected to turn MXene-derived TiO 2 microparticles into efficient, light-powered micromotors. In this regard, the first experiment aimed to evaluate the micromotors’ response to repeated on–off switches of the UV light source at time intervals of approximately 10 s in pure water ( Movie S1 ). Figure 3 a reports time-lapse micrographs showing the trajectories of an Au–TiO 2 micromotor and an Ag–TiO 2 micromotor. Both micromotors exhibited no self-propulsion in the dark in pure water. However, upon turning on UV light irradiation, the micromotors manifested their self-propulsion ability, which resulted in a net displacement. Once the dark condition was restored, the micromotors’ movement rapidly stopped. Therefore, in pure water, micromotors could rapidly change their motion status following the presence or absence of UV light. It is worth noting that after the UV light source was turned off, the micromotors occasionally displayed a significant displacement due to recoil phenomena rather than Brownian motion only, as observed after 20 s for the Au–TiO 2 micromotor in Figure 3 a. Nonetheless, the distinct behavior under UV light irradiation was confirmed by the remarkable rise of the instantaneous velocity of the micromotors as a function of time. In the dark, micromotors presented a similar instantaneous velocity of 0–1 μm s –1 , which increased to 1–3 μm s –1 under UV light irradiation. To get more insights into the nature of the micromotors’ motion behavior in dark and light conditions in pure water, movies of several micromotors were recorded and tracked to obtain their trajectories and calculate the mean squared displacement (MSD), denoted as ⟨Δ L 2 ⟩ [μm 2 ]. The magnitude of ⟨Δ L 2 ⟩ reflects the strength of the propulsive force, while its variation over time offers insight into the type of motion. For an ensemble of particles at the time interval Δ t [s], ⟨Δ L 2 ⟩ is defined by the first of the expressions sketched after this paragraph, where x (Δ t ) and y (Δ t ) [μm] are the coordinates of the i th particle at the time interval Δ t , x 0 and y 0 are the initial coordinates of the i th particle, and the brackets “⟨⟩” indicate the average over numerous particles. 50 For a spherical particle on a plane experiencing Brownian motion, i.e., random fluctuations of the particle’s position due to the diffusion process, ⟨Δ L 2 ⟩ is linear with Δ t (eq 3), where D [μm 2 s –1 ] is the diffusion coefficient. In some cases, ⟨Δ L 2 ⟩ varies as Δ t α , with α > 1, and the particles’ motion is referred to as “superdiffusive.” For particles in the ballistic motion regime, α = 2, and ⟨Δ L 2 ⟩ obeys a quadratic relationship (eq 4), where v [μm s –1 ] is the velocity. This theoretical framework was used to model the MSD data of micromotors (the results of MSD data fitting are reported in Table S1 ). For both types of micromotors, MSD analysis revealed the linearity between ⟨Δ L 2 ⟩ and Δ t in the dark according to eq 3 , which suggested that micromotors displayed Brownian motion with random displacements in the absence of UV light in pure water. Instead, ⟨Δ L 2 ⟩ followed a quadratic relationship with Δ t , as stated by eq 4 , under UV light irradiation, demonstrating the self-propulsion of micromotors with directional motion in pure water. This finding is in agreement with the trajectories in the time-lapse micrographs in Figure 3 a. Notably, the ⟨Δ L 2 ⟩ values of Au–TiO 2 were higher than those of Ag–TiO 2 . 
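The display expressions cited in the preceding paragraph (the MSD definition and eqs 3 and 4) are not reproduced in this text. From the definitions given (two-dimensional trajectories, diffusion coefficient D, velocity v), they presumably take the standard forms below; the exact forms used by the authors, in particular whether the diffusive term is retained in eq 4, should be checked against the original article.

```latex
% Hedged reconstruction of the MSD expressions referenced in the text:
% definition of the MSD for an ensemble of particles
\langle \Delta L^{2} \rangle = \left\langle \left[ x_{i}(\Delta t) - x_{i,0} \right]^{2} + \left[ y_{i}(\Delta t) - y_{i,0} \right]^{2} \right\rangle
% eq 3: Brownian (diffusive) motion in two dimensions
\langle \Delta L^{2} \rangle = 4 D \, \Delta t
% eq 4: ballistic (directional) motion superimposed on diffusion
\langle \Delta L^{2} \rangle = 4 D \, \Delta t + v^{2} \, \Delta t^{2}
```

Assuming this framework, extracting D and v from measured MSD curves reduces to a least-squares fit; a minimal sketch with synthetic data (not the authors' analysis script) is given below, using parameter values of the order reported in the following paragraphs.

```python
# Minimal sketch of extracting D and v from MSD data by least-squares fitting,
# assuming MSD(dt) = 4*D*dt + v**2 * dt**2 (synthetic data, not the authors' script).
import numpy as np
from scipy.optimize import curve_fit

def msd_model(dt, D, v):
    return 4.0 * D * dt + (v ** 2) * dt ** 2

# Synthetic MSD roughly mimicking an Au-TiO2 micromotor under UV light
# (D ~ 0.1 um^2/s, v ~ 2.6 um/s), with a little noise added.
dt = np.linspace(0.1, 2.0, 20)                       # time intervals, s
rng = np.random.default_rng(0)
msd = msd_model(dt, 0.1, 2.6) + rng.normal(0.0, 0.2, dt.size)

popt, _ = curve_fit(msd_model, dt, msd, p0=[0.05, 1.0])
print(f"D = {popt[0]:.2f} um^2/s, v = {abs(popt[1]):.2f} um/s")
```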
Consequently, by averaging on multiple micromotors, it was found that Au–TiO 2 micromotors’ light-driven motion in pure water was more powerful than that of Ag–TiO 2 micromotors. Moreover, compared to the previously published TiO 2 @Ti 3 C 2 /Pt micromotors, Au–TiO 2 micromotors reached a higher ⟨Δ L 2 ⟩ after 1 s (7 vs 1.5 μm 2 , approximately) under similar experimental conditions. 33 MSD data fitting allowed us to determine the diffusion coefficients and velocities of micromotors in the absence and presence of UV light irradiation in pure water. In the dark, micromotors had comparable diffusion coefficients (0.027–0.029 μm 2 s –1 ). Under UV light irradiation, a 3-fold increase in the diffusion coefficients was found (0.08–0.1 μm 2 s –1 ). Therefore, diffusion coefficients were similar independently to the type of metal material. In contrast, under UV light irradiation, a higher velocity was obtained for Au–TiO 2 micromotors than Ag–TiO 2 (2.6 vs 2.1 μm s –1 ), which is explained by the stronger built-in electric field at the Au–TiO 2 interface. Previous reports, which focused on comparing the velocity of light-driven metal–semiconductor Janus micromotors prepared with different metals, utilized electrochemical measurements to validate velocity results. In this context, metal–semiconductor micromotors were modeled as two electrodes, one for the metal material and the other one for the semiconductor material, with distinct electrochemical potentials. 26 , 28 Then, the researchers argued that the larger the potential difference between the two electrodes, the larger the resulting micromotors’ velocity. For example, Maric et al. prepared metal–TiO 2 micromotors using Pt, Cu, Fe, Ag, and Au. 25 The electrochemical potential analysis allowed them to justify the higher velocity of the Pt–TiO 2 micromotors. Nevertheless, for Fe–TiO 2 and Cu–TiO 2 micromotors, it predicted a lower velocity than Ag–TiO 2 , in contrast with the motion experiments results. This discrepancy is explained by the fact that even though the electrochemical potential difference generally provides valuable information about the velocity of micromotors, it may not consider condensed matter physics phenomena occurring upon the contact between the metal and the semiconductor materials, such as the establishment of a Schottky junction, which affects the charge carrier transfer process at the metal–semiconductor interface. In fact, Fe and Cu have generally larger work functions (4.81 and 4.94 eV) than Ag (4.74 eV), similar to the case of Au. 38 The built-in electric field at metal–TiO 2 Schottky junctions potentially has a double-edged sword effect: on the one hand, it promotes the separation of photogenerated carriers and the accumulation of holes at the interface; on the other hand, the higher potential barrier is detrimental to the electron transfer from the semiconductor to the metal. The investigation of the motion behavior of MXene-derived Au–TiO 2 and Ag–TiO 2 micromotors concludes that the stronger built-in electric field at the Au–TiO 2 interface positively impacts micromotors’ light-driven self-propulsion ability. This result suggests that the higher density of holes beneath the Au layer enhances the reaction rate of the oxidation of water to H + , generating a larger concentration gradient of H + and, thus, a more intense local-electric field responsible for micromotors’ self-electrophoresis, as illustrated in Figure 3 d. 
Thereby, in metal–TiO 2 micromotors, such a positive effect surpasses the negative effect associated with the higher potential barrier for electron transfer. It is worth noting that the aim of this study was not to achieve velocities higher than those reported in the literature. Nonetheless, Au–TiO 2 micromotors achieved a velocity higher than or comparable to that of γ-Fe 2 O 3 –Bi–V 2 C micromotors and many other fuel-free metal–semiconductor Janus micromotors tested in similar conditions. 25 , 26 , 28 , 35 The metal–TiO 2 micromotors in this study were prepared following the same fabrication procedure as the previously reported MXene-derived γ-Fe 2 O 3 /Pt/TiO 2 microrobots. 34 The only differences are the presence of a Pt layer rather than Au or Ag layers and the inclusion of magnetic nanoparticles to provide the microrobots with magnetic properties. Pt is known to be a better catalyst than Au and Ag for H 2 production from water. Therefore, it is not surprising that the Au–TiO 2 micromotors have a lower velocity than γ-Fe 2 O 3 /Pt/TiO 2 microrobots under UV light irradiation in pure water (2.6 vs 16 μm s –1 ). Still, it is worth noting that Au–TiO 2 micromotors were powered using a 30 times lower intensity of the UV light source than γ-Fe 2 O 3 /Pt/TiO 2 microrobots (∼50 vs ∼1500 mW cm –2 ). Besides, previous studies suggest that the velocity of Au–TiO 2 micromotors can be further increased by improving the compactness of the metal layer, for example by prolonging the sputtering deposition or using nonlayered semiconducting microparticles as the main building block. 51 , 52 To assess the applicability of micromotors in real scenarios, such as in wastewater purification, the UV light-driven self-propulsion of Au–TiO 2 and Ag–TiO 2 micromotors was investigated in raw wastewater, i.e., before entering the wastewater treatment plant and being subjected to any purification process. Figure S4 reports the photograph and micrograph of the wastewater sample, which reveal the massive presence of solid impurities. In this complex environment, the micromotors did not manifest the ability to autonomously move under UV light irradiation, being stuck on the microscope glass slide or obstructed by the surrounding microparticles. Nonetheless, wastewater can first be treated to remove the contaminants and release them into a second vessel, where the micromotors can induce their photocatalytic degradation under UV light irradiation. This approach allows for the confinement of potential secondary pollution. Alternatively, the micromotors can be loaded with magnetic nanoparticles, powered by an external magnetic field in wastewater and, simultaneously, activated by UV light irradiation to catch and degrade the pollutants. 7 Since UV light is not biocompatible, the self-propulsion ability of Au–TiO 2 and Ag–TiO 2 micromotors was also investigated under visible light irradiation in pure water. However, the micromotors displayed Brownian motion only ( Movie S4 ). This result is in agreement with the measured energy bandgap of the MXene-derived TiO 2 microparticles (3.20 eV), which indicates that the micromotors can be activated by UV light only (a brief photon-energy check is given after this paragraph). It must also be noted that many light-powered micro- and nanomotors cannot move in pure water and require additional fuels to manifest their self-propulsion ability. Among the fuels, H 2 O 2 is the most commonly reported, despite its potential toxicity at high concentrations. 
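To make the bandgap argument explicit, the photon wavelength can be converted to energy with E = hc/λ (≈1240 eV nm / λ), using the 375 nm UV and 480 nm visible wavelengths reported in the Experimental Section. This is a simple check based on values already given in the text, not an additional measurement.

```latex
% Photon-energy check based on the measured bandgap (3.20 eV) and the light sources
% described in the Experimental Section:
\lambda_{\mathrm{edge}} = \frac{hc}{E_{g}} \approx \frac{1240\ \mathrm{eV\,nm}}{3.20\ \mathrm{eV}} \approx 388\ \mathrm{nm}
% 375 nm UV photons carry ~3.31 eV > E_g, so they generate electron-hole pairs;
% 480 nm visible photons carry ~2.58 eV < E_g, consistent with Brownian-only motion.
```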
H 2 O 2 contributes to the micro- and nanomotors’ motion with the following reactions involving photogenerated charge carriers (sketched at the end of this paragraph). 37 Therefore, to deepen the comparison and understanding of the performance of different metal–semiconductor junctions and related electronic properties, the motion behavior of MXene-derived Au–TiO 2 and Ag–TiO 2 micromotors was also examined at the low concentration of 0.1 wt % H 2 O 2 . Once again, the first experiment explored the response of micromotors in dark and light conditions, influenced by the presence of the fuel ( Movie S2 ). The time-lapse micrographs in Figure 4 a indicate no significant difference for the Au–TiO 2 micromotors in 0.1 wt % H 2 O 2 compared to pure water: random fluctuations of the micromotor’s position were observed in the dark, and a directional motion was noted under UV light irradiation. Conversely, the Ag–TiO 2 micromotor presented a completely different scenario, characterized by directional motion in both the presence and absence of UV light irradiation. This behavior was detected for successive on–off switching of the UV light source, during which the micromotor preserved its mobile status. The lack of control over the on–off status of the Ag–TiO 2 micromotor compared to that of the Au–TiO 2 micromotor was reflected in the temporal variation of its instantaneous velocity. For the Au–TiO 2 micromotor, the on–off switching of the UV light source was followed by a rapid rise and fall of the velocity (between 0–1 μm s –1 and 2–3 μm s –1 ). On the other hand, the Ag–TiO 2 micromotor displayed a high and constant velocity in the dark (3–5 μm s –1 ), which further increased under UV light irradiation (5–7 μm s –1 ). Hence, the reaction between Ag and H 2 O 2 rendered the micromotor active even in the absence of light, and its activity was amplified upon exposure to UV light irradiation. Of note, the trajectories of Au–TiO 2 micromotors and Ag–TiO 2 micromotors in both Figures 3 a and 4 a are clockwise and anticlockwise, respectively. Even so, it was not possible to control the direction of motion of the micromotors, i.e., to force their movement along a clockwise or anticlockwise rotation. MSD analysis was employed to unambiguously determine the type of motion of micromotors in 0.1 wt % H 2 O 2 . Fitted MSD data are shown in Figure 4 b (the results of MSD data fitting are reported in Table S1 ). First, a linear fit of MSD data of Au–TiO 2 micromotors in the dark using eq 3 was attempted. The linearity between ⟨Δ L 2 ⟩ and Δ t was suggested by the absence of a net displacement in the time-lapse images of the Au–TiO 2 micromotor in the dark in 0.1 wt % H 2 O 2 . Nevertheless, the inconsistency of the fitting results suggested a superdiffusive motion behavior rather than Brownian motion. Then, by assuming a ballistic motion and a quadratic relationship between ⟨Δ L 2 ⟩ and Δ t as in eq 4 , a satisfactory fitting was attained. This outcome revealed the underlying reaction between Au and H 2 O 2 , whose contribution was not powerful enough to overcome Brownian motion. Under UV light irradiation, ⟨Δ L 2 ⟩ of Au–TiO 2 micromotors followed a parabola, as expected from the observed directional motion in 0.1 wt % H 2 O 2 in the time-lapse micrographs in Figure 4 a. Regarding the Ag–TiO 2 micromotors in 0.1 wt % H 2 O 2 , the hypothesis of ballistic motion was confirmed by fitting MSD data with eq 4 . 
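The display equations referenced at the start of the preceding paragraph are not reproduced in this text. Based on the cited mechanism (ref 37), they presumably correspond to the H 2 O 2 -assisted half-reactions below, in which photogenerated holes oxidize H 2 O 2 at the TiO 2 side and photogenerated electrons reduce it at the metal side; the exact stoichiometry should be checked against the original article.

```latex
% Hedged reconstruction of the H2O2-assisted half-reactions (ref 37):
% oxidation by photogenerated holes at the TiO2 side
\mathrm{H_2O_2} + 2\,h^{+} \rightarrow \mathrm{O_2} + 2\,\mathrm{H^{+}}
% reduction by photogenerated electrons at the metal side
\mathrm{H_2O_2} + 2\,\mathrm{H^{+}} + 2\,e^{-} \rightarrow 2\,\mathrm{H_2O}
```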
Notably, the MSD data of Ag–TiO 2 micromotors in the dark were already higher than those of UV light-irradiated Au–TiO 2 micromotors, before further increasing for UV light-irradiated Ag–TiO 2 micromotors. This observation highlighted a large difference between the metals originating from the presence of the H 2 O 2 fuel. The diffusion coefficients and velocities of micromotors in dark and light conditions in the presence of 0.1 wt % H 2 O 2 were obtained and are compared in Figure 4 c. Under UV light irradiation, the diffusion coefficients increased for both types of micromotors, with Ag–TiO 2 micromotors showing the highest diffusion coefficient (0.6 μm 2 s –1 ). Ag–TiO 2 micromotors exhibited a high diffusion coefficient also in the dark (0.33 μm 2 s –1 ), which was significantly larger than that of Au–TiO 2 micromotors under the same condition (0.045 μm 2 s –1 ) and comparable to that of Au–TiO 2 micromotors under UV light irradiation (0.4 μm 2 s –1 ). The same trend was observed for the velocity values (0.6 and 2.8 μm s –1 for Au–TiO 2 micromotors in dark and light conditions, 3.3 and 5.5 μm s –1 for Ag–TiO 2 micromotors in dark and light conditions). For both micromotors, the velocity under UV light irradiation in 0.1 wt % H 2 O 2 improved compared to pure water. In this regard, particularly relevant is the enhancement of the velocity of Ag–TiO 2 micromotors. The most remarkable finding of motion experiments in 0.1 wt % H 2 O 2 is that the presence of the fuel overturns the paradigm of the higher built-in electric field at the metal–semiconductor interface. Indeed, it was revealed that the catalytic properties of the metal may outweigh the constraints deriving from the energy band bending of the metal–semiconductor Schottky junction, as occurred for Ag–TiO 2 micromotors in the presence of 0.1 wt % H 2 O 2 . On these bases, the behavior of the two types of micromotors was described according to the scheme illustrated in Figure 4 d. In the dark, both the Au and Ag metal layers decomposed the H 2 O 2 fuel according to the following reaction, i.e., the catalytic disproportionation of H 2 O 2 into water and O 2 . However, the superior catalytic properties of Ag compared to Au concerning H 2 O 2 decomposition led to the generation of a larger product concentration gradient, which allowed the micromotors to overcome Brownian motion and achieve self-propulsion via a self-diffusiophoretic mechanism. As a consequence, under UV light irradiation, Au–TiO 2 micromotors marginally benefited from the presence of H 2 O 2 and moved with a velocity slightly higher than that of pure water. On the contrary, the synergy between Ag catalytic activity and self-electrophoresis allowed Ag–TiO 2 micromotors to reach the highest velocity despite the lower built-in electric field of the Ag–TiO 2 contact. Even though Ag–TiO 2 micromotors exhibited a more powerful self-propulsion, it is generally reported that the Ag layer easily dissolves during the catalytic reaction with an H 2 O 2 solution. Therefore, for practical applications, it is crucial to evaluate the potential release of Ag + ions in water. For this purpose, the Ag–TiO 2 micromotors were immersed in 0.1 wt % H 2 O 2 under UV light irradiation for 2 h. At the end of the experiment, the solution was analyzed by inductively coupled plasma-mass spectrometry (ICP-MS), which revealed a concentration of Ag + ions of 0.18 mg L –1 . Although this value is slightly above the secondary maximum contaminant limit (SMCL) of 0.1 mg L –1 set by the United States Environmental Protection Agency (U.S. 
EPA) and the World Health Organization (WHO), 53 it can be decreased by reducing the concentration of the micromotors. On the other hand, the ability of the Ag–TiO 2 micromotors to release a large number of Ag + ions during their self-propulsion in an H 2 O 2 solution can be beneficial for specific applications, such as the elimination of bacteria and the eradication of bacterial biofilms, due to Ag + ions antibacterial properties. 54 , 55 Polymer Degradation Application In a previous study, similar MXene-derived γ-Fe 2 O 3 /Pt/TiO 2 microrobots were applied to preconcentrate and detect nanoplastics in water via tunable electrostatic interactions and electrochemical measurements using miniaturized electrodes. 34 Conversely, in this study, MXene-derived metal–TiO 2 micromotors were applied for the degradation of synthetic PEG chains with a molecular weight of ∼600 g mol –1 . This polymer is widely used in cosmetics and pharmaceutical formulations. It was selected as a model for persistent water pollutants since its synthetic nature and the covalent bonds linking its organic subunits (monomers) make degradation challenging. 56 , 57 In addition, PEG degradation process can be accurately monitored by mass spectrometry techniques revealing the presence of oligomers even in nanomole concentration. 58 PEG degradation experiments were initially performed under UV light irradiation in pure water. Consequently, Au–TiO 2 micromotors were selected as the optimal micromotors for these experiments due to their higher velocity under this condition. PEG degradation was evaluated by electrospray ionization mass spectrometry (ESI-MS). Figure 5 a compares the spectra of PEG in pure water before any treatment, PEG treated with UV light irradiation in pure water for 8 h, and PEG treated with Au–TiO 2 micromotors under UV light irradiation in pure water for 8 h. The mass distribution of untreated PEG was centered around 600 m / z , as expected. After the treatment with UV light irradiation, the measured mass distribution slightly shifted to lower m / z values and new signals appeared in the m / z region 100–300, indicating that the prolonged exposure to UV light initiated the degradation of the polymer. After the treatment with Au–TiO 2 micromotors under UV light irradiation, the mass distribution further shifted to lower m / z values, while a second and pronounced mass distribution, centered around 300 m / z , suggested a higher PEG oxidation. This is due to the photocatalytic activity of Au–TiO 2 micromotors, which produced reactive oxygen species (ROS) during their self-propulsion, breaking and oxidizing the PEG chains as revealed from mass peak assessment. In pure water, the polymer was not thoroughly degraded. Therefore, H 2 O 2 was involved in the treatments to improve the degradation efficiency of PEG. Of note, H 2 O 2 toxicity, even at concentrations as low as 0.1 wt %, limits its applicability, especially in the biomedical field. However, in water purification applications, H 2 O 2 is often used in combination with (photo)catalysts and light irradiation, allowing Fenton and photo-Fenton reactions that enhance the production of ROS and accelerate the degradation process. On these bases, the experiments were repeated in the presence of 0.1 wt % H 2 O 2 and the obtained spectra are reported in Figure 5 b. A remarkable degradation of PEG was obtained using H 2 O 2 and UV light irradiation due to the UV light breaking the H 2 O 2 molecule to form hydroxyl radicals (OH · ). 
In the presence of the Au–TiO 2 micromotors, the mass spectrum presented almost no signal for m / z above 400, and a narrow mass distribution around m / z 200 suggested that the long PEG chains were broken into pieces with lower molecular weights. In principle, the degradation efficiency can be further improved by increasing the amount of photocatalysts, H 2 O 2 concentration, UV light irradiation intensity, or treatment duration. In this regard, by prolonging the treatment with Au–TiO 2 micromotors, 0.1 wt % H 2 O 2, and UV light irradiation from 8 to 16 h, a superior PEG degradation was obtained ( Figure S5 ). Control experiments were also performed to compare the MXene-derived Au–TiO 2 micromotors with Au–TiO 2 microparticles prepared by the sputtering deposition of an Au layer on purchased TiO 2 microparticles. In particular, PEG was treated with MXene-derived or commercial Au–TiO 2 microparticles under UV light irradiation in pure water and 0.1 wt % H 2 O 2 for 8 h. The acquired mass spectra are compared in Figure S6 . In pure water, the performances of the MXene-derived Au–TiO 2 micromotors and commercial Au–TiO 2 micromotors were comparable, i.e., the distribution of the degradation products was similar for the two cases. Instead, in the presence of H 2 O 2 , the MXene-derived Au–TiO 2 micromotors showed a significant improvement compared with their cheaper counterpart. This improvement is evident since the mass spectra of the latter still presented distinct signals in the m / z region 400–700 and may be attributed to the larger exposed surface of multilayered TiO 2 microparticles. Despite demonstrating slightly inferior performance in PEG degradation compared to MXene-derived TiO 2 , commercial TiO 2 proved more advantageous overall due to its lower cost. Indeed, the preparation of MXene-derived TiO 2 involves more expensive precursors and additional preparation steps.
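The spectra comparisons above are qualitative. As an illustration of how such comparisons could be summarized in a single number, the sketch below computes the fraction of total ion intensity remaining above a chosen m/z threshold from a peak list. This is a hypothetical analysis helper, not the authors' workflow; the peak-list format, threshold and toy values are assumptions made for illustration only.

```python
# Hypothetical helper: summarize an ESI-MS peak list by the fraction of total ion
# intensity found at or above a chosen m/z threshold (e.g., 400). Not the authors'
# actual workflow; peak lists are assumed to be (m/z, intensity) pairs.
def high_mass_fraction(peaks, mz_threshold=400.0):
    """Fraction of total intensity at m/z >= mz_threshold."""
    total = sum(intensity for _, intensity in peaks)
    heavy = sum(intensity for mz, intensity in peaks if mz >= mz_threshold)
    return heavy / total if total > 0 else 0.0

# Toy peak lists mimicking the reported trend: untreated PEG centered near m/z 600,
# micromotor + H2O2 + UV treatment shifting the distribution toward m/z ~200.
untreated = [(550.0, 30.0), (600.0, 100.0), (650.0, 40.0)]
treated = [(150.0, 60.0), (200.0, 100.0), (250.0, 50.0), (450.0, 5.0)]

print(f"untreated: {high_mass_fraction(untreated):.2f}")  # ~1.00
print(f"treated:   {high_mass_fraction(treated):.2f}")    # ~0.02
```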
Conclusions This study investigated the light-powered motion of metal–semiconductor micromotors based on MXene-derived TiO 2 microparticles in contact with metals (Au and Ag) characterized by different work functions, leading to diverse electronic properties for the metal–TiO 2 interface. The fabrication involved transforming exfoliated Ti 3 C 2 T x MXene microparticles into multilayered TiO 2 by thermal annealing and then depositing thin Au or Ag layers on their surface, asymmetrically, by the sputtering technique. The motion behavior of the resulting MXene-derived metal–TiO 2 micromotors was studied in pure water and 0.1 wt % H 2 O 2 as a fuel. Under UV light irradiation, both types of micromotors showed self-propulsion in pure water via self-electrophoresis, with Au–TiO 2 micromotors exhibiting velocities higher than those of Ag–TiO 2 micromotors. This finding was explained by the more intense built-in electric field at the Au–TiO 2 Schottky junction, which improves the photogenerated electron–hole pairs separation in TiO 2 and hole accumulation at the interface, as indicated by numerical simulations, which contribute to the self-electrophoretic motion mechanism. In the presence of H 2 O 2 , the behavior changed significantly. Au–TiO 2 micromotors displayed marginal improvement in velocity under exposure to UV light, while Ag–TiO 2 micromotors manifested autonomous motion even in the absence of UV light by self-diffusiophoresis and the highest velocity under UV light irradiation due to the synergy between Ag’s catalytic activity and self-electrophoresis. Overall, the study demonstrates the importance of metal–semiconductor interfaces and the competition with metal catalytic properties in the light-driven motion of micro- and nanomotors, highlighting the impact of the metal choice on their performance. In addition, the developed Au–TiO 2 micromotors proved a great potential in the remediation of polymer-contaminated water, cleaving PEG chains by photocatalysis in pure water and photo-Fenton reaction in the presence of H 2 O 2 . These conclusions provide valuable insights into designing and optimizing metal–semiconductor interfaces for photocatalytic micro- and nanomotors and their applications. Furthermore, it is anticipated that the absence of potential barriers obstructing the flow of electrons and the built-in electric field pointing to the semiconductor in Ohmic junctions can favor photogenerated charge separation within the semiconductor and, at the same time, successive electron transfer to the metal, resulting in enhanced self-propulsion. To verify this hypothesis, future investigations will focus on comparing the performance of metal–semiconductor Ohmic and Schottky junctions in the light-powered micro- and nanomotors field.
Light-powered micro- and nanomotors based on photocatalytic semiconductors convert light into mechanical energy, allowing self-propulsion and various functions. Despite recent progress, the ongoing quest to enhance their speed remains crucial, as it holds the potential for further accelerating mass transfer-limited chemical reactions and physical processes. This study focuses on multilayered MXene-derived metal–TiO 2 micromotors with different metal materials to investigate the impact of electronic properties of the metal–semiconductor junction, such as energy band bending and built-in electric field, on self-propulsion. By asymmetrically depositing Au or Ag layers on thermally annealed Ti 3 C 2 T x MXene microparticles using sputtering, Janus structures are formed with Schottky junctions at the metal–semiconductor interface. Under UV light irradiation, Au–TiO 2 micromotors show higher self-propulsion velocities due to the stronger built-in electric field, enabling efficient photogenerated charge carrier separation within the semiconductor and higher hole accumulation beneath the Au layer. On the contrary, in 0.1 wt % H 2 O 2 , Ag–TiO 2 micromotors reach higher velocities both in the presence and absence of UV light irradiation, owing to the superior catalytic properties of Ag in H 2 O 2 decomposition. Due to the widespread use of plastics and polymers, and the consequent occurrence of nano/microplastics and polymeric waste in water, Au–TiO 2 micromotors were applied in water remediation to break down polyethylene glycol (PEG) chains, which were used as a model for polymeric pollutants in water. These findings reveal the interplay between electronic properties and catalytic activity in metal–semiconductor junctions, offering insights into the future design of powerful light-driven micro- and nanomotors with promising implications for water treatment and photocatalysis applications.
Experimental Section Fabrication of MXene-Derived Metal–TiO 2 Micromotors Exfoliated Ti 3 C 2 T x MXene microparticles (XFNano, China) were suspended in pure water (18 MΩ cm) at a concentration of 1 mg mL –1 and sonicated for 1 h in a bath sonicator. Then, the suspension was dropped on microscope glass slides, serving as a substrate, and dried overnight. The resulting Ti 3 C 2 T x MXene films were transferred into a tubular furnace and underwent a thermal annealing process in synthetic air (2.5 L min –1 ) at 550 °C for 0 min, i.e., the temperature ramp-up immediately followed by the temperature ramp-down, with a heating rate of 10 °C min –1 , yielding photocatalytic MXene-derived TiO 2 microparticles. Afterward, highly asymmetric Janus structures were obtained by depositing thin metal layers on the annealed microparticles by a sputtering technique. In particular, an Emitech K550X sputter coater (Quorum, Ringmer, East Sussex, U.K.) was used to deposit Au and Ag layers from high purity targets (99%) under the sputtering conditions of 50 mA current and 12 min deposition time, fabricating MXene-derived Au–TiO 2 and Ag–TiO 2 micromotors with a nominal metal layer thickness of about 80 nm. Finally, a scalpel was used to detach the micromotors from the substrates mechanically. Characterization Techniques Surface morphology and elemental composition of samples were characterized by a Gemini field emission SEM Carl Zeiss Supra 25. Raman spectra of samples before and after the thermal annealing process were acquired in backscattering geometry using a HORIBA Jobin-Yvon system coupled to an Olympus BX41 microscope. He–Ne laser radiation (633 nm wavelength, ∼5 mW power) was focused to a spot size of 1 μm through a 100× microscope objective. A 550 mm focal length spectrometer with 1800 lines mm –1 grating was used to collect the Raman emission from samples. The optical bandgap of MXene-derived TiO 2 microparticles was determined using a PerkinElmer LAMBDA 1050+ UV/vis/NIR spectrophotometer furnished with an integrating sphere. Motion Experiments The light-powered motion of MXene-derived micromotors was tested in pure water and 0.1 wt % H 2 O 2 (Merck, 30 wt %) without any surfactant using a Leica DMI4000 B inverted optical microscope equipped with a Basler digital camera (acA1920-155uc). A light source (Leica EL6000), coupled with fluorescence filter cubes, allowed micromotors to be irradiated with UV light (375 nm wavelength, ∼50 mW cm –2 intensity) or visible light (480 nm wavelength, ∼200 mW cm –2 intensity) to induce their movement. A control experiment was also performed on a raw wastewater sample. Movies of micromotors’ motion behavior were recorded at a frame rate of 10 fps through Pylon Viewer software and analyzed using Fiji software to obtain their trajectories and calculate their MSD and velocity. Ag Dissolution Experiment To investigate the potential corrosion of the Ag layer of MXene-derived Ag–TiO 2 micromotors in the presence of H 2 O 2 , the micromotors (1 mg mL –1 ) were left in 0.1 wt % H 2 O 2 under UV light irradiation for 2 h. Then, the suspension was centrifuged at 4000 rpm for 5 min to separate the micromotors from the supernatant, which was further analyzed using a Nexion 300X ICP/MS instrument (PerkinElmer Inc., Waltham, Massachusetts, USA) operated in kinetic energy discrimination (KED) mode for interference suppression. Before analysis, the sampled solution was diluted, acidified with nitric acid, and spiked with the internal standards required for quantifying Ag + ions. 
The instrumental analyses were repeated three times for higher accuracy and were validated by comparison with a standard reference material, SRM 1643f Trace Elements in Water. Polymer Degradation Experiments PEG degradation experiments were performed in UV light-transparent cuvettes containing aqueous suspensions with 1 mg mL –1 synthetic PEG 600, 1 mg mL –1 MXene-derived Au–TiO 2 micromotors, and, where indicated, 0.1 wt % H 2 O 2 . The cuvettes were exposed to UV light irradiation for different durations (8 or 16 h) using a 365 nm UV LED lamp. At the end of the treatment, the suspensions were centrifuged at 4000 rpm for 5 min to separate the micromotors from the supernatants, which were further analyzed to record mass spectra by ESI-MS. Control experiments were performed under the same experimental conditions with Au–TiO 2 microparticles, prepared by the sputtering deposition of an Au layer on purchased TiO 2 microparticles (Merck, SKU 224227). ESI-MS was performed using a Thermo Scientific Orbitrap Exploris 120 (Thermo Fisher Scientific, Bremen, Germany) with a heated electrospray ionization interface. Mass spectra were recorded in positive ion mode, by direct infusion of sample solutions, in the m / z range 100–1000 at a resolving power of 60000 (full-width-at-half-maximum, RFWHM, at m / z 200), under the following conditions: capillary temperature 300 °C, capillary voltage 3.5 kV, nebulizer gas (nitrogen) flow rate of 10 arbitrary units, and auxiliary gas flow rate of 2 arbitrary units. The Orbitrap MS system was tuned and calibrated using a Thermo Scientific Pierce TM FlexMix TM calibration solution. Data acquisition and analysis were performed using the Xcalibur software.
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsami.3c13470 . Simulated electron concentration at the metal–TiO 2 interface; SEM image of a Ti 3 C 2 T x MXene microparticle; trajectories of MXene-derived TiO 2 microparticles under UV light irradiation in pure water; MSD data fitting results; photograph and micrograph of the raw wastewater sample; ESI-MS spectra of PEG treated for different durations; and ESI-MS spectra of PEG treated with MXene-derived and commercial Au–TiO 2 ( PDF ) MXene-derived micromotors under successive on–off switching of UV light irradiation in pure water ( AVI ) MXene-derived micromotors under successive on–off switching of UV light irradiation in 0.1 wt % H 2 O 2 ( AVI ) MXene-derived micromotors under UV light irradiation in raw wastewater ( AVI ) MXene-derived micromotors under visible light irradiation in pure water ( AVI ) Supplementary Material Author Contributions M.U. synthesized and characterized the micromotors, investigated their motion behavior, performed polymer degradation experiments, analyzed and interpreted the data, and wrote the manuscript. L.B. performed the numerical simulations. S.D. and S.C.C. performed ICP-MS and ESI-MS analyses and provided spectra interpretation. M.U. conceived the idea. M.U. and S.M. supervised the project. The authors declare no competing financial interest. Acknowledgments M.U. acknowledges the financial support by the Ministry of University and Research, under the “PNRR—Missione 4 “Istruzione e Ricerca”—Componente 2 “Dalla Ricerca all’Impresa” Investimento 1.2 “Finanziamento di progetti presentati da giovani ricercatori”, project ID: SOE_0000044, CUP number: E63C22002970006. This work was also partially funded by the European Union (NextGeneration EU), through the MUR-PNRR project SAMOTHRACE—Sicilian MicronanoTech Research and Innovation Center (ECS00000022, CUP B63C22000620005). Prof. L. Lanzanò (University of Catania) and the Bionanotech Research, and Innovation Tower (BRIT) laboratory of the University of Catania (Grant no. PONa3_00136 financed by the MIUR) are gratefully acknowledged for the optical microscopy facility. The authors thank Dr. G. Indelli (University of Catania) and G. Pantè (CNR-IMM) for the technical support and Dr. G. Franzò (CNR-IMM) for providing the optical power meter.
CC BY
no
2024-01-16 23:43:51
ACS Appl Mater Interfaces. 2023 Dec 22; 16(1):1293-1307
oa_package/1f/32/PMC10788834.tar.gz
PMC10788836
0
These important subjects have been chosen to have free online access. The British Medical Bulletin website also has a section to celebrate its amazing archive (see https://academic.oup.com/bmb/pages/from-the-archives ) and details of the Nobel Prize-winners who wrote for the Bulletin and went on to win the accolade. It also has information on which reviews have been most widely cited and information about the OUP blog, which often has input from the Bulletin authors and editors. The first free online access review is: Slowing down or returning to normal? Life expectancy improvements in Britain compared to five large European countries before the COVID-19 pandemic by Minton, Hiam, Dorling and McKee from the University of Glasgow, University of Oxford and the London School of Hygiene and Tropical Medicine, UK. They say life expectancy is an important summary measure of population health. In the absence of a significant event like war or disease outbreak, trends should, and historically have, increased over time, albeit with some fluctuations. Life expectancy improvements in Great Britain have stalled in recent years, and a similar stalling was seen in other high-income countries during the mid-2010s. The significance and causes of the slowdown in improvement in life expectancy in Britain are disputed. Other measures, such as lifespan disparity, complement it in understanding changing trends. Whilst annual fluctuations in life expectancy are expected, continued stalls should raise concern. The three British nations examined were the only ones amongst these European countries to experience stalling of life expectancy gains in both sexes. Whilst Britain is making less progress in health than similar countries, more research is needed to explain why. The second free-to-view review is entitled: Monkeypox: a review of the 2022 outbreak by Lim, Whitehorn and Rivett from the Cambridge University Hospitals NHS Foundation Trust and Public Health Laboratory, Cambridge, UK. They say that in May 2022, the World Health Organization declared a multi-country monkeypox outbreak following cases reported from 12 member states which were not endemic for monkeypox virus. There are variations in the clinical presentations seen in the current outbreak which have not been seen in prior outbreaks. More research is needed to investigate the reasons for these differences. The higher number of human immunodeficiency virus (HIV)-positive patients in the current outbreak has allowed a better description of the disease in patients co-infected with HIV and monkeypox. The absence of more severe symptoms in HIV-positive patients in the current outbreak could possibly be because most of these patients had well-controlled HIV. Current treatment and vaccination options have been extrapolated from studies of other Orthopox viruses. There remains a need for more data on the safety and efficacy of these options in the context of monkeypox infections. In the rest of this volume, the third review is titled: Return to sport or work following surgical management of scapholunate ligament (SLL) injury by Liew, Dingle, Semple and Rust from the University of Edinburgh and the University of Manchester, UK. They say that their systematic review aims to compare the rate and time to return to sport or work following surgical interventions for isolated SLL injury. Fourteen papers, including six different surgical interventions, met the criteria for final analysis. 
All surgical techniques demonstrated acceptable rates of return to work or sport (>80%). The optimal surgical intervention for isolated SLL injury remains undetermined due to heterogeneity and limited sample sizes of published studies. This systematic review has provided clarification on the available literature on treatment modalities for isolated SLL injuries in the absence of osteoarthritis (OA). Prospective, randomized, primary studies are needed to establish optimal treatment for acute isolated SLL injuries. The fourth review is entitled: Femoroacetabular impingement syndrome negatively affects the range of motion of the affected hip by Albertoni, Bargen, Hoxha, Munari, Maffulli and Castellini from the University of Genoa, Istituto Ortopedico Galeazzi and Istituto di Sanità Pubblica, Italy; Barts and the London School of Medicine, London, UK. They say it is unclear whether femoroacetabular impingement syndrome (FAIS) affects hip range of motion (ROM). A total of 17 studies were included. Comparison of FAIS patients versus healthy controls showed that hip ROM was clinically and statistically reduced in FAIS for internal rotation, hip flexion, adduction and flexion–abduction–external rotation, with certainty of evidence ranging from low to high. Comparison of FAIS versus healthy controls showed no statistically significant differences in any direction of movement, albeit with uncertainty of evidence. Hip ROM may be reduced in all directions except extension in FAIS compared to controls. The fifth review is entitled: Cancer survivors and adverse work outcomes by de Boer, de Wind, Coenen, van Ommen, Greidanus, Zegers, Duijts and Tamminga from the University of Amsterdam, Amsterdam Public Health, Amsterdam Cancer Centre, Amsterdam; Movement Sciences and the Netherlands Comprehensive Cancer Organization, Utrecht, the Netherlands. They say that the number of cancer survivors of working age is rising. A range of factors is associated with adverse work outcomes such as prolonged sick leave, delayed return to work, disability pension and unemployment in cancer survivors. These factors include cancer type and treatment, fatigue, cognitive functioning, work factors and elements of health care systems. Effective supportive interventions encompass physical and multicomponent interventions. The role of behavioural determinants and of legislative and insurance systems is unclear. The optimal timing of delivering supportive interventions is uncertain. Further focus on vulnerable groups, including specific cancer types and those with lower income, lower educational level and precarious employment, is needed. The sixth review is entitled: Knee osteoarthritis, joint laxity and patient reported outcome measures (PROMs) following conservative management versus surgical reconstruction for anterior cruciate ligament (ACL) rupture by Migliori, Oliva, Torsiello, Eschweiler, Hildebrand and Maffulli from the University of Salerno, Italy; University of Aachen, Germany and Queen Mary University of London, UK. They say that patients who rupture their ACL can be managed conservatively or undergo reconstruction surgery. Several studies published by July 2022 compare surgical and conservative management following ACL rupture. The latest evidence suggests that surgical management may expose patients to an increased risk of early onset knee OA. After the initial trauma, surgical reconstruction may produce more damage to the intra-articular structures compared to conservative management. 
The present study compared surgical reconstruction versus conservative management for primary ACL ruptures in terms of joint laxity, patient reported outcome measures (PROMs), and rate of OA. ACL reconstruction provides significant improvement in joint laxity compared to conservative management, but is associated with a significantly greater rate of knee OA, despite similar results at PROMs assessment. The seventh review is entitled: Methods of assessing value for money of UK-based early childhood public health interventions by Richardson, Murphy, Hinde, Fulbright and Padgett from the University of York. They say that economic evaluation has an important role to play in the demonstration of value for money of early childhood public health interventions, however, concerns have been raised regarding their consistent application and relevance to commissioners. This systematic review of the literature aims to collate the breadth of the existing economic evaluation evidence of these interventions and to identify the approaches adopted in the assessment of value. This review considered inconsistencies across methodological approaches used to demonstrate value for money. Future resource allocation decisions regarding early childhood public health interventions may benefit from consistency in the evaluative frameworks and health outcomes captured, as well as consistency in approaches to incorporating non-health costs and outcomes; incorporating equity concerns; and the use of appropriate time horizon. The eighth review is entitled: Children and bioethics: Clarifying consent and assent in medical and research settings by Spriggs from the University of Melbourne, Australia. She says that the concept of consent in the paediatric setting is complex and confusing. Clinicians and researchers want to know whose consent they should obtain, when a child can provide independent consent, and how that is determined. The aim of this article is to establish what produces the justification to proceed with medical or research interventions involving children and the role of consent. It clarifies concepts such as consent, assent, capacity and competence. Engaging with children and involving them in decisions about matters that affect them is a good thing. It examines the role of competence or capacity and the question of when a child can provide sole consent. Flawed assumptions around competence/capacity are common. An account of children’s well-being should accommodate children’s interests during the transition to adulthood. The ninth review is entitled: Implementing brief and low intensity psychological interventions for children and young people with internalizing disorders by Roach, Cullinan, Shafran, Heymen and Bennett from the Institute of Child Health and University College, London, UK. They say that many children fail to receive the mental health treatments they need, despite strong evidence demonstrating efficacy of brief and low intensity psychological interventions. This review identifies the barriers and facilitators to their implementation. Studies identified organizational demands, lack of implementation strategy and stigma as barriers to implementation, and the need for clear training and plans for implementation as facilitators. No standardized implementation outcomes were used across papers so meta-analysis was not possible. Barriers and facilitators have been clearly identified across different settings. 
Longitudinal studies can identify methods and processes for enhancing long-term implementation and consider ways to monitor and evaluate uptake into routine practice. The tenth review is entitled: Loneliness—a clinical primer by Lederman from Hong Kong University. He says loneliness is prevalent worldwide. It is also associated with an increased risk for depression, high blood pressure, cardiovascular disease, stroke and early death. As such, loneliness is a major public health issue. This paper summarizes the salient points clinicians should know and encourages clinicians to assume an active part in the identification, mitigation and prevention of loneliness. Loneliness is a distressing subjective experience, which does not always correlate with social isolation. Identifying loneliness in the clinic may be time consuming and challenging. There is scarce robust evidence to support interventions. More research is needed to further elucidate the health impacts of loneliness as well as to find evidence-based interventions to prevent and mitigate loneliness that could then be implemented by policy-makers and clinicians. The eleventh review is entitled: Micro RNA in meniscal ailments: current concepts by Migliorini, Vecchio, Giorgino, Eschweiler, Hildebrand and Maffulli from the University of Aachen, Germany; the University of Salerno, the Orthopaedic Institute Galeazzi, Milano, Italy; Barts and the London School of Medicine and the University of Keele, UK. They say micro ribonucleic acids (miRNAs) are short non-coding RNAs that act primarily in post-transcriptional gene silencing and are attracting increasing interest in musculoskeletal conditions. Recently, the potential of miRNAs as biomarkers for diagnosis and treatment of meniscal injuries has been postulated. Evaluation of the role of miRNAs in patients with meniscal tears is still controversial. A systematic review was conducted to investigate the potential of miRNA in the diagnosis and management of meniscal damage. Intra-articular injection of microRNA-210 in vivo may represent a potential innovative methodology for the management of meniscal injuries. Characterization of microRNA expression in the synovial fluid could lead to the development of better early diagnosis and management strategies for meniscal tears. March 2023
CC BY
no
2024-01-16 23:43:51
Br Med Bull. 2023 Apr 5; 145(1):1-4
oa_package/7e/0f/PMC10788836.tar.gz
PMC10788837
37391365
Introduction There is a growing call to approach gambling from a public health perspective in the UK 1 as has been undertaken in some other countries. 2 Gambling is a heavily marketed and commonly participated in activity; the Gambling Commission telephone survey estimates over 40% of people aged at least 16 years in the UK have gambled in the last 4 weeks. 3 In their 2021 evidence review, Public Health England reported that half of the UK population participates in gambling, with 0.5% of the population experiencing a high level of harm. 4 Gambling-related harm disproportionately affects disadvantaged and marginalized groups, exacerbates existing health and social inequalities 5 and intersects with challenges including suicide prevention, alcohol, smoking, interpersonal violence, criminality and homelessness. 6 Disordered gambling impacts physical and mental health in a range of ways and has significant community and societal costs. Recent estimates in the UK suggest the economic burden of harmful gambling is approximately £1.27 billion, including £342.2 million in mental and physical health harms and £79.5 million in employment and education harms. 7 Disordered gambling is associated with a greater proximity to and density of in-person gambling facilities, 8 , 9 and a person with disordered gambling is more likely to live in a deprived area, 10 be unemployed, 11 smoke, consume alcohol excessively and have mental ill health. 7 Each person with a gambling disorder has on average 6–10 affected others who may experience relationship strain, stress and financial loss, with interrelated health impacts. 7 Geospatial mapping is a technique used to display and describe the distribution and variation of information within a specified geography. Modern mapping technologies have been increasingly used to assist in developing public health initiatives, 12 and visualize social determinants of health in association with rates of health conditions and behaviours, for example in studying the prevalence of non-communicable chronic disease, 13 and links between alcohol outlet density and violent crime. 14 Mapping has been used to explore gambling at local levels, including in the UK; 15 , 16 however, previous studies have focused on urban and city locations, and mapping has been at a broad geographical level. In recent years, local public health teams have shown increasing interest and ambition in gambling harm prevention, yet there are few shared examples of gambling harm mapping being used to inform local public health practice. Lincolnshire is a large county in England host to urban, rural and coastal communities. Levels of wealth, deprivation and infrastructure vary significantly across the county, and coastal areas experience significant health challenges. 17 The local prevalence of gambling-related harms was not known. Geospatial mapping techniques were employed as part of a local health needs assessment, to inform and develop a local public health approach to gambling harm. Aim To map gambling-related harm in Lincolnshire using routine data and geospatial mapping to predict ‘hotspots’ of harm, and to compare findings between urban, rural and coastal areas.
Method We produced three heat maps of Lincolnshire using QGIS version 3 18 :
1. The location of licenced gambling premises against Index of Multiple Deprivation (IMD) deciles. The IMD is an official measure of relative deprivation in England and is part of a suite of outputs that form the Indices of Deprivation.
2. The density of licenced gambling premises.
3. The aggregate prevalence of disordered gambling associated characteristics.
All data management and analysis were conducted in Microsoft Excel. Licenced premises location We used Gambling Commission data (5 January 2022) to identify premises with a licencing authority named as Boston Borough Council, City of Lincoln Council, East Lindsey District Council, North Kesteven District Council, South Holland District Council, South Kesteven District Council or West Lindsey District Council. 19 A map pin was placed at the premises’ postcode by importing the postcode data of the premises from Microsoft Excel and assigning a colour corresponding to premises type as reported by the Gambling Commission. 20 The IMD decile at Lower Super Output Area (LSOA) level was sourced from www.gov.uk and used as background heat mapping. Rural, urban and coastal classification All LSOAs were assigned a classification of ‘rural’, ‘urban’ or ‘coastal’. Rural and urban LSOA were assigned their classification using the Rural Urban (2011) Classification of LSOA, 21 produced by Open Geography Portal. Coastal LSOA were assigned their classification by isolating LSOA within areas classified as ‘Coastal Towns’ by the ONS Coastal Towns in England and Wales dataset. 22 The rate of licenced premises per 1000 population by each classification was calculated using their respective ONS mid-2020 population estimates. 23 Density of licenced premises Premises postcodes were matched to their respective ward or LSOA area. The rate of licenced premises per 1000 population was calculated using their respective ONS mid-2020 population estimates to identify areas where the number of premises per head of population was greatest; these were then visualized on choropleth maps. Disordered gambling associated characteristics We accessed Public Health England Fingertips data 24 for variables predictive of disordered gambling in adults, at the lowest geography and most recent year available:
- IMD 2019 (ward)
- Unemployment (percentage of 16–64-year-olds claiming out-of-work benefits) 2019–20 (ward)
- Percentage prevalence of current smoking in adults (Annual Population Survey) 2019 (district)
- Admission episodes for alcohol-related conditions (Broad—primary and secondary diagnoses): directly standardized rate per 100 000 population 2019/20 (district)
- Estimated percentage prevalence of common mental health disorders (any type of depression or anxiety) in people aged 16 or over 2017 (district)
We hypothesized that areas with a high level of the above factors in combination could indicate a greater risk of gambling harm. For each variable, the wards or districts (as relevant to the variable) were ordered from the highest prevalence, rate or IMD score, as appropriate, to the lowest. The areas were ranked according to this order, where the highest prevalence indicated a rank of 1, the next highest a rank of 2 and so on. Where data were only available at district level, these were transformed into a ward-level rank by assigning a mid-septile value (of 158 wards) to all wards in the corresponding district, i.e. wards in the most prevalent district were assigned rank 11, wards in the next most prevalent district were assigned rank 33 and so on.
This avoided the district-level ranking unfairly influencing the aggregate rank. The rank scores for each ward were summed, and wards were then ordered from the lowest score (highest risk) to the highest (lowest risk). The aggregate risk rank score was used to produce a heat map at ward level.
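To make the ranking procedure concrete, the minimal sketch below reproduces the same logic in Python with pandas. The authors carried out the equivalent steps in Microsoft Excel; the data frames, column names and sample values here are hypothetical and purely illustrative.

```python
# Minimal sketch of the ward-level aggregate risk ranking described above.
# The DataFrames, column names and sample values are hypothetical.
import pandas as pd

N_WARDS = 158  # number of wards, as stated in the Methods

# Ward-level variables (higher value = higher prevalence / deprivation)
wards = pd.DataFrame({
    "ward": ["A", "B", "C"],
    "district": ["D1", "D2", "D1"],
    "imd_score": [45.0, 20.0, 30.0],
    "unemployment_pct": [6.1, 2.3, 4.0],
})

# A district-level variable (e.g. smoking prevalence)
districts = pd.DataFrame({"district": ["D1", "D2"], "smoking_pct": [18.0, 12.0]})

# Rank 1 = highest prevalence for each ward-level variable
for col in ["imd_score", "unemployment_pct"]:
    wards[f"rank_{col}"] = wards[col].rank(ascending=False, method="min")

# District-level data: order districts from highest to lowest prevalence, then
# give every ward in the k-th ranked district the mid-septile value
# (k - 0.5) * 158 / 7, roughly the 11, 33, ... quoted in the text
# (the exact rounding used by the authors is not specified).
districts["order"] = districts["smoking_pct"].rank(ascending=False, method="min")
districts["rank_smoking_pct"] = (districts["order"] - 0.5) * N_WARDS / 7
wards = wards.merge(districts[["district", "rank_smoking_pct"]], on="district")

# Sum the ranks; the lowest aggregate score indicates the highest predicted risk
rank_cols = [c for c in wards.columns if c.startswith("rank_")]
wards["aggregate_score"] = wards[rank_cols].sum(axis=1)
print(wards.sort_values("aggregate_score")[["ward", "aggregate_score"]])
```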
Results Lincolnshire is a large county in England with the greatest population count in urban areas, followed by rural and then coastal ( Table 1 ). There was an overall county rate of 0.17 premises per 1000 people, with a total of 132 premises ( Table 2 ). Coastal areas had the highest density of gambling premises (1.7 per 1000 population) and the highest absolute number of premises, followed by urban areas (0.14 per 1000 population and 54 premises). Rural areas had the lowest density (0.03 per 1000 population). Licenced gambling premises are scattered across the county ( Fig. 1 ). Though not exclusive to areas of deprivation, higher concentrations and clustering of these premises, particularly adult gaming centres and betting shops, were evident in coastal and urban areas. There is a higher density of gambling premises in urban (e.g. Lincoln and Grantham) and coastal areas (e.g. Mablethorpe and Skegness). Ingoldmells (coastal) has a notably higher rate of premises per population than other wards. Figure 2 shows the highest aggregate prevalence of disordered gambling associated characteristics in Lincoln, Boston and the east coast. North and South Kesteven generally have the lowest combined prevalence of risk factors for disordered gambling. When compared with Figure 1A , there is an overlap between the prevalence of disordered gambling characteristics and the location of gambling premises.
Discussion Main findings of this study The results confirm and enhance findings from earlier studies, which have correlated disordered gambling, deprivation factors and the presence of licenced gambling premises. Coastal areas had the highest number of gambling premises despite being home to only 5.2% of the county population. There was a noticeable difference in rate of premises between area types, with no clear association between the number of premises and population size. Previous studies had used various mapping techniques to correlate numbers of licenced premises with indices of deprivation, generally in city-based urban areas. 8 , 15 This study adds new knowledge in relation to coastal areas and smaller towns within a more rural area of England. We have found that gambling premises are clustered in coastal and urban areas, and an overlap between the location of gambling premises and characteristics which predict disordered gambling prevalence. These findings can be applied in developing a public health approach to gambling-related harm in Lincolnshire, taking into account differences in access and exposure to in-person gambling premises, and population risk characteristics. What is already known on this topic It is known that there is a higher density of gambling premises in urban areas. Earlier research used mapping techniques to examine gambling premises and disordered gambling-related characteristics in two urban cities. 16 We have built on this to analyse findings at a county level, including urban, rural and coastal areas. GambleAware has also produced maps to predict areas of gambling-related harm. 15 Our findings also use more recent data and include small area analyses below Upper Tier Local Authority level, which better translates into intelligence led public health practice. The mapping for this study also found an unusually high level of premises in coastal areas. There are historical and contemporary explanations, with the popularity of seaside gambling arcades in UK cultural life, and the conversion to gambling of many premises that became available cheaply after the decline of other cultural activities such as theatre going in seaside locations. 25 , 26 The Chief Medical Officer’s 2021 report highlighted coastal population health as one of the most enduring health challenges in the UK 17 , and there is a significant risk this will exacerbate health inequalities. The UK remains unusual in the global context in the way that we allow children access to gambling opportunities. 27 Whilst individuals under 18 are generally not allowed to gamble, an exception was made in the Gambling Act 2005 for category D gaming machines. There is a history of widespread availability of and participation in gambling in arcades in seaside towns such as Skegness, by children and families. These arcades contain many of what are now category D machines—these include low-stake fruit machine style machines, coin pushers (sometimes called penny falls) or crane grabs. The machines are found widely in what are known as family entertainment centres, adult gaming centres and pubs, with smaller numbers in other venues such as members clubs, betting shops and casinos. Emerging evidence clearly connects childhood gambling activities in seaside arcades with disordered gambling as an adult. Newall et al. 28 investigated the links between legal underage gambling (i.e. Category D machines) and disordered gambling symptoms in adults. 
They questioned over 1000 UK gamblers between the ages of 18 and 40 about their experiences with Category D slot machines, the National Lottery, National Lottery scratchcards, coin push machines and claw grabber machines, all legally available to people under the age of 18. Over 50% of those questioned had interacted with all of the aforementioned gambling products. Having played a legal gambling product had no association with gambling disorders; however, ‘more frequent use of each of the five products was associated with an increased risk of disordered adult gambling’. Essentially, the more those questioned played with legal forms of gambling under the age of 18, the more likely they were to develop gambling disorders later in life, with claw machines carrying the greatest risk. The frequency of gambling play as children was robustly associated with adult disordered gambling. What this study adds We confirm clear links between numbers of gambling premises, deprivation and certain risk factors for disordered gambling. Whilst we were unable to directly explore these correlations, our findings suggest that different place-based factors, including the availability of gambling to children in seaside arcades (Ingoldmells) or low levels of education and significant migrant communities in the local population (Boston), may be important to consider in relation to disordered gambling. Some of the coastal, rural and urban issues are significant but not unique to Lincolnshire. This adds to a small but growing literature about what have been called ‘gamblogenic’ environments. 29 As well as the coastal hotspots, the mapping showed Boston as an area with increased prevalence of characteristics associated with disordered gambling, such as the directly standardized rate of admission episodes for alcohol-related conditions, and the estimated percentage prevalence of common mental health disorders in people aged 16 or over. It should be noted that, as well as the characteristics used for the mapping exercise, Boston is known for having a particularly high proportion of migrants in the population, with the 2021 census showing 23.6% of the current population born outside of the UK (compared to the UK average of 16%). 30 It is known from studies in a number of countries that migrants may be more vulnerable to gambling harm, and that there is a harm paradox, with lower participation rates, but a disproportionate level of gambling-related harm among those who do participate. 31 Our findings can be applied in targeting interventions to specific populations at the greatest risk of gambling-associated harm, and advocating for integration of gambling harm prevention into work addressing associated risk factors (e.g. mental health or smoking cessation services). The relationship between these factors is, however, complex, and areas of the greatest risk do not necessarily mirror those of the greatest deprivation. 15 We have highlighted significant issues in coastal and rural areas, and this provides lessons for similar geographies across the country. Our findings may help to inform local licencing policy, and be used to monitor changes to premises density and risk in Lincolnshire over time. We have found no prior example of local authority public health teams publishing the use of mapping techniques to examine the local distribution and density of gambling premises in comparison with population risk characteristics.
We offer a methodology that other public health teams could apply in practice, and use the findings to inform and develop a local approach to gambling-related harm. Limitations of this study To produce the findings, a set of proxies that correlate with disordered gambling had to be used in the absence of records of actual disordered gambling at any local level. This highlights a limitation of existing gambling data. Both prevalence surveys and data about help seeking provide some demographic information, but only at a national level, with no available data about the location of those who respond. To properly understand correlations between any place-based factors and disordered gambling, there is an urgent need for better data at local level about the prevalence of disordered gambling and the nature and use of help seeking. The absence of data and knowledge is particularly obvious in relation to rural areas of the UK. 32 Most gambling-related problems and harm now relate to online gambling. 33 With 94% of UK adults having access to the internet in 2021, it is not surprising that there has been a gradual switch from people gambling in-person to choosing to gamble online. Online gambling increased in 2020/21 due to COVID-19 and has continued post-lockdown. Research by the Gambling Commission shows that participation rates in online gambling have been steadily increasing year-on-year for the past 4 years, with 27% of UK adults gambling online in some form in September 2022, compared with 18.4% in September 2018. 34 GamCare has seen a trend over a number of years and now reports more people seeking help for online gambling than for offline gambling. In 2021–22, 63% of treatment users were reporting online gambling as their principal form. 30 It is unknown what impact this is having on rural communities. Individuals living in such communities would have had very limited access to gambling opportunities in the past, and now have gambling accessible all of the time via smartphones and other devices. We were not able to include online gambling exposure, which would improve understanding of geographical exposure to the products most likely to inflict harm and offer insight to inform targeted work for people experiencing disordered gambling. Routine reporting of remote gambling participation would enhance and facilitate future research. We have shown important findings for Lincolnshire, and our findings illustrate an issue that needs further study with a larger mapping exercise of other geographies and coastal towns in England. Recommendations This study adds to the growing literature about the importance of considering place-based factors in the development and continuation of behaviours such as disordered gambling. There is a need for more data about the location and residence of gamblers, and those experiencing disordered gambling, in order to better understand which place-based factors relate to deprivation, availability and convenience of different kinds of gambling opportunities, and a range of social and cultural factors. A significant gap remains in our knowledge of online gambling, and gambling in rural areas. Studies are needed to specifically recruit gamblers in rural areas, and future studies of online gambling should include granular local data about the geographical location of participants.
Conclusion Licenced gambling premises were most clustered in urban and coastal areas of a large English county, and this clustering correlated with areas of deprivation and with population characteristics predictive of gambling harm. Our study provides data that may help to target scarce public health resources to where they are most needed, and a methodology for public health teams to apply in developing a local approach to gambling-related harm.
Abstract Background Disordered gambling is a public health problem with interconnections with health and social inequality, and adverse impacts on physical and mental health. Mapping technologies have been used to explore gambling in the UK, though most were based in urban locations. Methods We used routine data sources and geospatial mapping software to predict where gambling-related harm would be most prevalent within a large English county, host to urban, rural and coastal communities. Results Licensed gambling premises were most concentrated in areas of deprivation, and in urban and coastal areas. The aggregate prevalence of disordered gambling associated characteristics was also greatest in these areas. Conclusions This mapping study links the number of gambling premises, deprivation, and risk factors for disordered gambling, and highlights that coastal areas have a particularly high density of gambling premises. Findings can be applied to target resources to where they are most needed.
Conflict of interest None. Funding No funding was required. Data availability The datasets were derived from sources in the public domain: Gambling Commission. Register of gambling premises [Internet]. 2022. Available from: https://www.gamblingcommission.gov.uk/public-register/premises . Office for Health Improvement and Disparities. Fingertips Public Health Data [Internet]. 2022. Available from: https://fingertips.phe.org.uk/ . Ethics This was service work using publicly available routine data sources and therefore ethical approval was not sought. M. Saunders, Specialty Registrar in Public Health J. Rogers, Senior Lecturer in Social Science A. Roberts, Professor of Psychology L. Gavens, Consultant in Public Health P. Huntley, Head of Health Intelligence S. Midgley, Public Health Analyst
CC BY
no
2024-01-16 23:43:51
J Public Health (Oxf). 2023 Jun 30; 45(4):847-853
oa_package/15/52/PMC10788837.tar.gz
PMC10788838
37099756
Introduction Smoking in pregnancy is associated with an increased risk of adverse outcomes, such as stillbirth, preterm birth and low birth weight, 1 , 2 and smoking cessation can reduce those risks to a level almost comparable to non-smokers. 3 In England, 9.7% of women were smoking at the time of delivery in 2021. 4 Although this represents a decrease from about 14% in 2011, it is markedly short of the target set by the Department of Health of 6% or lower to be achieved by 2022. 5 Considering current trends, it may take a further 10 years from 2021 to achieve that target. In addition, among those who were smokers at the time they attended their first midwife appointment, only 36% stopped smoking in pregnancy. 4 This illustrates the societal and personal difficulties surrounding smoking cessation and emphasizes the importance of supporting pregnant women in quitting smoking, as they are likely to be more receptive and motivated during pregnancy due to the perceived benefits for them and their babies. 6 Although London has the lowest smoking prevalence among pregnant women in England (about 4–5%), 4 we hypothesized that the low overall prevalence could mask important inequalities according to ethnicity and deprivation with implications for smoking cessation service delivery. Therefore, this study aimed to investigate the prevalence of smoking among pregnant women in North West London stratified by ethnicity and deprivation.
Methods Data were obtained from electronic health records collected by maternity services between January 2020 and August 2022 at Imperial College Healthcare NHS Trust, which serves the population of North West London. Data were extracted for demographic variables and smoking status at the time of first contact with maternity services. Smoking status was self-reported and validated with measurement of exhaled carbon monoxide. We calculated the prevalence of smoking stratified by deprivation and ethnicity. Deprivation was categorized into fifths of the Index of Multiple Deprivation (IMD) for the postcode of residence. Ethnicity was self-reported according to the pre-defined categories available on the electronic health records system. Both variables were collected at the time of first contact with maternity services. This study was approved by the Yorkshire & The Humber—South Yorkshire Research Ethics Committee, reference 19/YH/0435.
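As an illustration of the stratified prevalence calculation, the sketch below shows one way this could be done in Python with pandas; the record-level data frame, column names and categories are hypothetical and are not the study dataset.

```python
# Minimal sketch of stratifying current-smoking prevalence at booking by
# ethnicity and IMD fifth. All data, columns and categories are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "ethnicity": ["White British", "White Irish", "White British", "Any other Asian"],
    "imd_fifth": [1, 3, 5, 2],  # 1 = most deprived fifth
    "smoking_at_booking": ["current", "ex", "never", "never"],
})

records["current_smoker"] = records["smoking_at_booking"].eq("current")

# Prevalence (%) of current smoking at booking, by ethnic group
by_ethnicity = records.groupby("ethnicity")["current_smoker"].mean().mul(100).round(1)

# Prevalence (%) by IMD fifth, to show the deprivation gradient
by_imd_fifth = records.groupby("imd_fifth")["current_smoker"].mean().mul(100).round(1)

print(by_ethnicity)
print(by_imd_fifth)
```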
Results A total of 25 231 women were included in this study, with a mean age of 32 years ( Table 1 ). The largest ethnic group was any other White background (26%), followed by White British (16%) and any other Asian background (13%). Pregnant women were distributed across the entire range of the IMD fifths, with 21% in the most deprived fifth and 34% in the second least deprived fifth. At the time of booking of antenatal care (mean of 12 weeks), 4% of the women were current smokers, 17% were ex-smokers and 78% were never smokers. There were marked differences in the smoking prevalence between ethnic groups ( Fig. 1 ). For instance, the prevalence of smoking at booking for antenatal care was 12% for women of Mixed White and Black Caribbean ethnicity and 9% for White Irish women. There was also a stark deprivation gradient, with an over 4-fold increase in the prevalence of smoking between the most and the least deprived IMD fifths.
Discussion Main findings This study demonstrated that in a population of pregnant women living in North West London, with a broad distribution of deprivation and ethnic diversity, there are important differences in the prevalence of smoking at the time of booking antenatal care. These illustrate how an overall low local prevalence of smoking (4%) can hide inequalities between socioeconomic and ethnic groups, with a prevalence of smoking 3- to 4-fold above the average and similar to the highest national values in certain groups. What is already known on this topic The importance of smoking cessation during pregnancy has been highlighted by recent guidelines on tobacco smoking published by the National Institute for Health and Care Excellence (NICE) in England, which recommend using financial incentives (i.e. vouchers) to encourage smoking cessation during pregnancy in addition to nicotine replacement therapy and behavioural support. 7 , 8 These guidelines also recommended e-cigarettes to support smoking cessation for adults but not for pregnant women, as evidence on their efficacy and safety in pregnancy is lacking. 9 Since these guidelines were published, a large UK-based trial demonstrated that financial vouchers (i.e. LoveToShop shopping vouchers redeemable in many retail outlets) reduced the odds of smoking in pregnancy by almost 3-fold, even though most women relapsed after giving birth. 10 Pregnant women experiencing deprivation have an increased risk of adverse pregnancy and birth outcomes due to a constellation of risk factors. 11 Unfortunately, they are also the most likely to continue smoking in pregnancy. 4 Therefore, they have the most to gain from smoking cessation and are likely to be more receptive to financial incentives. 12 Despite compelling evidence on their cost-effectiveness and NICE recommendation, implementation of financial vouchers as incentives for smoking cessation in pregnancy in England remains patchy. For instance, although in Greater Manchester they have been routinely offered since 2018, in Greater London they are not yet available. Consistent implementation of NICE recommendations across the country may help to address inequalities in smoking in pregnancy. What this study adds Our findings of stark inequalities illustrate the importance of disaggregating data by ethnicity and socioeconomic group. The markedly higher prevalence of smoking among women living in the most deprived areas in comparison with those living in the most affluent areas is concerning because women experiencing deprivation are more likely to have other risk factors for adverse pregnancy outcomes, such as obesity, cardiometabolic diseases and poor mental health. 13–15 Smoking further elevates the risk of adverse birth outcomes, such as preterm birth and intrauterine growth restriction, which have adverse and lifelong consequences for the offspring. 16 Children growing up in deprived neighbourhoods are also exposed to multiple risk factors for poor health, such as lack of green space, poor living conditions and food poverty. 17 Therefore, reducing the avoidable detrimental consequences of smoking for mothers and their offspring, particularly those experiencing deprivation, is crucial. Limitations of this study This study has some limitations. First, it relied on routinely collected electronic health records rather than on data collected purposefully for research. This meant that a small fraction of data was missing (e.g. 10% for ethnicity and 1% for smoking).
Second, our study population is specific to North West London and findings may not be generalizable elsewhere. Third, there is a substantial overlap between ethnic minorities and deprivation. However, we found very different prevalences of smoking between ethnic groups experiencing similar levels of deprivation. Fourth, we did not have data on smoking status at the time of birth to investigate inequalities in smoking cessation during pregnancy.
Conclusion There are marked inequalities in smoking based on ethnic background and deprivation among pregnant women in North West London. Financial incentives, as recommended by NICE guidelines, may improve smoking cessation during pregnancy and reduce inequalities even in areas with low overall smoking prevalence in pregnancy, preventing the detrimental and lifelong consequences of smoking for the offspring.
Abstract Background London has the lowest smoking prevalence among pregnant women in England. However, it was unclear whether the low overall prevalence masked inequalities. This study investigated the prevalence of smoking among pregnant women in North West London stratified by ethnicity and deprivation. Methods Data regarding smoking status, ethnicity and deprivation were extracted from electronic health records collected by maternity services at Imperial Healthcare NHS Trust between January 2020 and August 2022. Results A total of 25 231 women were included in this study. At the time of booking of antenatal care (mean of 12 weeks), 4% of women were current smokers, 17% were ex-smokers and 78% never smokers. There were marked differences in the smoking prevalence between ethnic groups. Women of Mixed—White and Black Caribbean ethnicity and White Irish women had the highest prevalence of smoking (12 and 9%, respectively). There was an over 4-fold increase in the prevalence of smoking between the most and the least deprived groups (5.6 versus 1.3%). Conclusions Even in a population with an overall low prevalence of smoking in pregnancy, women experiencing deprivation and from certain ethnic backgrounds have a high smoking prevalence and hence are the most likely to benefit from smoking cessation interventions.
Funding ACPG is funded by a Clinical Lectureship in Public Health Medicine from the National Institute of Health Research in the UK. Conflict of interest There are no conflicts of interest to declare for any of the authors. Data Availability All data are available upon request from the corresponding author. Ana-Catarina Pinho-Gomes , NIHR Clinical Lecturer in Public Health Medicine Edward Mullins , Clinical Senior Lecturer and Honorary Consultant Obstetrician
CC BY
no
2024-01-16 23:43:51
J Public Health (Oxf). 2023 Apr 23; 45(3):e518-e521
oa_package/a8/1e/PMC10788838.tar.gz
PMC10788839
37144428
Introduction Homelessness is typically viewed as an issue relating to housing and social care, but there is increasing evidence it is also a public health issue. 1 People who are homeless are considered to be one of the populations experiencing the poorest health in society, frequently suffering from a trimorbidity of physical health, mental health and substance misuse issues. 2 The 2022 Homeless Health Needs Audit revealed that 77% of respondents reported a physical health condition, 82% a mental health diagnosis, and 38 and 29% reported drug and alcohol addiction, respectively. 1 The average age of death amongst people who are homeless in England and Wales between 2013 and 2019 was 43 years in women and 46 years in men, significantly lower than the general population (81 and 76 years, respectively). 3 The NHS constitution states that NHS care is to be provided to all based on clinical need irrespective of an individual’s background. 4 The homeless population face extreme health inequalities in both health outcomes and access to healthcare. 5 In accordance with the inverse care law, people who are homeless are 40 times more likely to be unregistered with a GP than a housed member of the population. 2 Use of A&E is often high amongst individuals who are homeless, and a study in Birmingham revealed their rate of A&E attendances to be almost 60 times that of the general population. 6 The study described in this paper was conducted in Gateshead, a metropolitan borough in the North East of England with a population of ~202 500. It is the 47th most deprived of the 317 local authority areas in England. Gateshead has the second highest rate of homelessness in the North East, with 145 people estimated to be rough sleeping or living in temporary accommodation in 2019. 7 Local audits within Gateshead highlighted issues accessing healthcare amongst people who are homeless, with high proportions of individuals wanting to access GP, mental health and addiction services reportedly finding it impossible or difficult to do so (70, 89 and 67%, respectively). 8 Also, 66% of individuals presenting to the local drop-in centre self-reported not being registered with a GP, whereas 79% reported having accessed A&E in the past year. 9 At the time of conducting this study, there was no specialist general healthcare provision for individuals who are homeless within Gateshead. This qualitative study aimed to explore access to healthcare for individuals who are homeless in Gateshead, with an emphasis on what good provision would look like.
Method Methodology This study was influenced by an appreciative inquiry (AI) approach, which is a methodology incorporating action research and organisational change. 10 AI focuses on strengths and positive experiences to facilitate motivation, change and development, and is therefore well suited to topics where challenges and negative experiences are well established. 11 The four stages of AI are presented in Fig. 1 . Only the first two, discovery and dream, were completed in this study because of time constraints and restrictions imposed by the COVID-19 pandemic. Sampling and recruitment Purposive sampling was used to identify participants with insights relevant to the research aim. 12 Inclusion criteria were: having experience working directly or indirectly with the local homeless community, being able to communicate in English and being aged 18 or over. Participants were identified through the lead researcher’s professional networks as a public health registrar in Gateshead. A sample size of 10–15 participants was considered to be feasible and sufficient in terms of reaching data saturation. Twenty-two individuals were invited by e-mail to participate in the study. Fifteen responded and 12 interviews were conducted (following difficulties in contacting or arranging interviews with the other three). Data collection Data were collected using semi-structured interviews to allow the necessary topics to be discussed, but with scope for variation between interviews. 13 A topic guide was developed based on existing literature and focused on generalised access for primary care, A&E, mental health and drug and alcohol services. It included probes on existing barriers and facilitators, as well as envisioning feasible changes to improve access. The questions utilised an appreciative mindset, which attempts to understand what we need more of rather than what we want less of. 11 Data collection occurred during July and August of 2020 and because of the COVID-19 restrictions during this time interviews were conducted over Microsoft Teams or Zoom. Zoom provided an automated transcription, which was used as the starting point for transcription, whereas Microsoft Teams interviews required full transcription. Analysis Data were analysed using thematic analysis employing the one-sheet-of-paper (OSOP) method, which involves mapping emerging codes, themes and categories on a single piece of paper. 14 Coding was performed by the lead researcher in discussion with co-authors and began during the transcription process. Following the use of OSOP, codes and potential themes were written on Post-It Notes and arranged visually. Emerging themes were then entered into Microsoft Excel to further review, rearrange, name and define themes and sub-themes, which were discussed with other team members. As a form of respondent validation, the themes were circulated to all participants to ensure the researchers’ interpretations reflected the local situation. No participants raised issues with the analysis. Ethics Ethical approval was granted by Newcastle University Faculty of Medical Sciences Research Ethics Committee (Ref. 4118/2020). Participants received an information sheet and provided written consent via an online form.
Results Eight participants worked in the voluntary sector (five support workers, one manager and two in combined roles), three worked for the local authority and one worked in primary care management. No demographic data were collected but eight of the 12 presented as female. This paper focuses on the identified themes relating to the category of ‘what does good look like’. These were: facilitate primary care registration, training, joined-up working, utilising the voluntary sector, specialised roles and bespoke offer. The analysis also identified a number of themes that were categorised as barriers and facilitators to accessing healthcare, which are not reported here. Facilitate primary care registration Access to general practice for people experiencing homelessness was considered vital, in terms of encouraging appropriate use of both primary care and acute services. Participants described difficulties encountered by individuals who are homeless while attempting to register with a GP; these often related to issues around lack of proof of address or identification. Options to facilitate registration include using the third sector drop-in centre as a ‘care of’ address and educating practices to allow registration of individuals despite a lack of identification or proof of address, as per NHS guidelines. Training A greater understanding of homelessness and knowledge of multiple complex needs amongst healthcare workers were perceived to facilitate access to healthcare and improve attitudes towards these patients. Participants felt the care provided to individuals who are homeless should take a holistic approach and be trauma-informed, ‘rather than just knowing the medical bit’ (Participant 11). There was felt to be a particular lack of understanding of dual diagnosis and lack of recognition that drugs and alcohol are frequently used as a form of self-medication for mental health issues and/or trauma. Improving this understanding could facilitate access to mental health services for individuals with co-existing substance abuse issues. Joined-up working Related to the previous theme, participants perceived joined-up working to be important in improving access to healthcare because of overlap in the needs of service users. There was recognition of good work within different areas of healthcare but it was felt that greater communication between teams and services would improve care. Participants described how individuals often have to repeat themselves and are bounced between services. Embrace the voluntary sector Many of the participants were support workers and described strong and trusting relationships with their service users. The support workers were dedicated to being a safe haven for individuals who are homeless and persevered to maintain relationships regardless of how difficult this could be at times. Participants believed there was the potential for the support worker role to help facilitate healthcare access. They felt their role allowed the time and flexibility to improve access and that they could advocate for their service users and persist with disengaged individuals. Recognising support workers as integral parts of multidisciplinary teams could help to improve support and advocacy for people experiencing homelessness. However, support workers felt their concerns were often dismissed because they are not health professionals and that their close relationship and understanding of the individual who is homeless is often overlooked. 
Specialised roles There were a range of healthcare worker roles suggested by participants which might improve access to healthcare; for example, a nurse or GP who is ‘more aware of [...] mental health and drug and alcohol use and any of chaotic behaviour’ (Participant 5). Several participants identified the need for a specialised mental health worker such as a community psychiatric nurse, psychotherapist or counsellor, as access to mental health services was identified as a particular concern. They also highlighted the benefit of a link worker or navigator type role, which could help to promote joined-up working by linking in with other services. Bespoke offer There was a general perception that mainstream healthcare is ‘shaped for people that are not homeless people [...] people that can walk and run about their appointments’ (Participant 6) . A different approach was felt to be needed to improve access for the homeless population. There was also a desire for a holistic, bespoke approach in which services run alongside one another to allow the multiple and complex needs of individuals experiencing homelessness to be addressed in one place. Participants described how their service users’ behaviour could be challenging in traditional healthcare settings, likely because of previous negative experiences. Their behaviour was perceived to improve in settings in which they were comfortable. Locating healthcare in a familiar setting could therefore improve access and engagement and allow interventions to be more opportunistic. While there was a request for specialised provision, it was recognised the goal was to reintegrate individuals who are homeless back into mainstream services in the long term.
Discussion Main finding of this study This study identified how access to healthcare could be improved for individuals who are homeless. Even though the study setting was a single metropolitan area in the North East of England, the applicability of the findings is not limited to this location. The main findings were that access to healthcare could be improved by: facilitating primary care registration, training, joined-up working, embracing the voluntary sector, specialised roles and a bespoke offer. The findings suggest that improving accessibility of services for people who are homeless can be achieved by a combination of breaking down barriers and drawing upon facilitators. Access to healthcare is a complex issue that requires a multifaceted approach. 15 What is already known on this topic The focus of existing literature is on barriers to accessing healthcare at an individual level, such as self-esteem, 16 complex needs 17 and skills 18 ; at provider level, such as stigma 19 and lack of understanding 20 ; and at healthcare system levels, such as registration issues, 21 appointment systems 16 and duration. 18 Acknowledged facilitators to access include drop-in appointments, 17 a multidisciplinary team approach, 19 specialist primary care services 17 and outreach programmes. 22 There is increasing support for a specialised healthcare service for individuals who are homeless 2 , 23 ; a study conducted in the UK identified 84% of individuals who were homeless preferred seeking specialised services over mainstream services. 24 However, previous studies have not specifically addressed the potential contribution of the voluntary sector in relation to this specialist provision. What this study adds There remains a paucity of literature on this subject 5 , 21 and two specific gaps in the evidence shaped this study. First, no studies have been conducted on this topic exclusively in North East of England, as the focus is on areas with larger homeless populations. Consequently, the recommendations may not be appropriate, feasible or cost-effective in areas with smaller populations. As this study focuses on a local authority with a smaller homeless population; the findings may be transferable to areas with similar populations and no existing specialist homeless healthcare provision. Second, there is limited research involving staff outside the health service (e.g. in local government or the voluntary sector), despite the major role they often have in supporting people experiencing homelessness. Many of the findings of this study related to ways to enhance current services that would not require any major restructuring. Therefore, it is likely the resources necessary to implement these findings would be minimal. The first stage involved facilitating GP registration as there were still instances of individuals being denied registration because of lack of identification and/or proof of address. Improving GP registration is crucial in enabling individuals who are homeless to access primary care and training could help disseminate the message that registration cannot be refused on the grounds of lack of identification. 25 There are existing resources nationally that aim to combat issues around registration, for example, ‘My Right to Healthcare’ cards produced by Groundswell. 26 Training and education could also improve knowledge and understanding of caring for patients who are homeless, particularly in terms of trauma-informed care that features in the NHS Long Term Plan. 
27 Education could also focus on holistic care, as mental health, substance misuse and homelessness are not separate issues. 21 Training would potentially decrease stigma towards individuals who are homeless, and ideally a degree of flexibility would be offered for these patients. Organisations such as FairHealth 28 and Pathway 29 provide training, which could be utilised by healthcare providers. Training could incorporate national recommendations such as those from the Care Quality Commission, which suggests double appointments and a named GP for individuals who are homeless. 25 A key finding of this study was the shared belief that if existing services worked together more efficiently, then individuals who are homeless would receive better care, as individuals are often bounced between services. This finding supports previous studies, which emphasised the positive role of multi-agency working. 16 , 19 , 22 Issues such as time constraints and work pressures of staff were acknowledged as barriers to joined-up working. It was felt that the current lack of joined-up working was a result of a broken healthcare system rather than the individual staff members. Attempting to change pathways within a system can be challenging as ways of working are often embedded in organisations. 30 Finally, a key strength of this study is the involvement of participants from the voluntary sector. These participants offered a unique stance as they wanted to advocate for their service users but were also able to reflect upon the challenges they can present to healthcare settings. Support workers often have a strong and trusting relationship with individuals who are homeless and could help with the practicalities and logistics of seeking healthcare but could also advocate for their service users. Support workers felt passionate about the health of their service users; however, they felt their opinion and experience were often dismissed by healthcare providers. The Department of Health and Social Care highlights the potential role of the voluntary sector as strategic partners in addressing health inequalities. 31 However, it is important to note that not all individuals who are homeless have contact with a support worker. The findings did also highlight the potential for specialised roles and a bespoke offer of healthcare. However, this would likely require significant organisation and investment. This bespoke healthcare system was most often described as an opportunistic, drop-in arrangement at a location already visited by individuals who are homeless. There was also a desire for specialised roles working specifically with the homeless community, including suggestions for a specialist GP or nurse, a mental health role and a link worker/navigator. Limitations of this study The findings need to be interpreted bearing in mind that they are the perceptions of staff supporting the homeless community rather than of those with first-hand experience of homelessness. Further work is needed to ensure that the findings of this study are reflective of individuals experiencing homelessness. Furthermore, there will be individuals, potentially the most entrenched rough sleepers or the hidden homeless, who are not accessing any services or support; it is likely their healthcare needs are even greater and the barriers even more challenging than those captured in this study.
A further limitation is that the latter two stages of the AI were not completed; therefore, for the recommendations of this study to be implemented, an assessment of feasibility and cost-effectiveness would be required.
Conclusion Access to healthcare for people who are homeless is a complicated issue and there is no one solution. This study proposes actions that could improve access, and many of the recommendations could be implemented without delay as they would have minimal cost and would not require any major restructuring of existing services. Further work is needed in collaboration with local stakeholders, including the local authority, healthcare, the voluntary sector and members of the homeless population, to complete the AI and understand the feasibility and cost-effectiveness of some of the more complex interventions suggested in this study. The NHS constitution reports a social duty to promote equality and ‘to pay particular attention to groups or sections of society where improvements in health and life expectancy are not keeping pace with the rest of the population’ (4, p3). Individuals who are homeless face extreme health inequalities 5 and there is both a moral responsibility and a legal duty, as set out by the 2012 Health and Social Care Act, 2 to address this. This study comes at a time when the coronavirus pandemic has increased the momentum and desire for action in addressing homelessness nationally. Tackling health inequalities associated with homelessness is a colossal task, but working together to improve healthcare access is a starting point on that journey.
Abstract Background Individuals who are homeless encounter extreme health inequalities and, as a result, often suffer poor health. This study aims to explore ways in which access to healthcare could be improved for individuals who are homeless in Gateshead, UK. Methods Twelve semi-structured interviews were conducted with people working with the homeless community in a non-clinical setting. Transcripts were analysed using thematic analysis. Results Six themes were identified under the broad category of ‘what does good look like’, in terms of improving access to healthcare. These were: facilitating GP registration; training to reduce stigma and to provide more holistic care; joined-up working in which existing services communicate rather than work in isolation; utilising the voluntary sector as support workers could actively support access to healthcare and provide advocacy; specialised roles such as specialised clinicians, mental health workers or link workers; and specialised bespoke services for the homeless community. Conclusions The study revealed issues locally for the homeless community accessing healthcare. Many of the proposed actions to facilitate access to healthcare involved building upon good practice and enhancing existing services. The feasibility and cost-effectiveness of the suggested interventions require further assessment.
Conflicts of interest At the time of conducting the research, the lead author (SP) was working in Gateshead Council as a public health registrar. Three study participants were employed by the same organisation but were not previously known to SP. The co-authors (SV and LL) have no conflicts of interest to declare. Funding None. Data availability Data cannot be shared for ethical/privacy reasons. Code availability Not applicable. Authors’ contributions All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by SP. The first draft of the manuscript was written by SP, and all authors commented on subsequent versions of the manuscript. All authors read and approved the final manuscript. Ethics approval Approval for the study was granted by the Faculty of Medical Sciences Research Ethics Committee at Newcastle University (Ref. 4118/2020). Consent to participate All participants gave their written informed consent via an online form to participate in the study. Consent for publication All participants gave written informed consent via an online form which included consent for their anonymised data to be used in publications arising from this study.
Sadie Perkin , Public Health Specialty Registrar Shelina Visram , Senior Lecturer Laura Lindsey , Lecturer
CC BY
no
2024-01-16 23:43:51
J Public Health (Oxf). 2023 Apr 12; 45(3):e486-e493
oa_package/4d/17/PMC10788839.tar.gz
PMC10788840
37477219
Introduction People are defined as sleeping rough if they sleep outside or without adequate shelter. 1 A higher prevalence of comorbidities, 2 and lower vaccination rates than the general population, 3–7 make people experiencing rough sleeping more vulnerable to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections, coronavirus disease 2019 (COVID-19) complications and other nonrespiratory infections such as tuberculosis. 8 In March 2020, the ‘everyone in initiative’ was launched to help people experiencing rough sleeping in England to comply with COVID-19 regulations. 9 To this aim, additional funding was made available for accommodation providers to provide more single occupancy accommodation or to restructure existing accommodations to make them COVID-19-safe (i.e. allow isolation). To help people experiencing rough sleeping to better adhere to personal protective measures, the UK government maintains that single-occupancy accommodation should be provided wherever possible. 10 However, in some situations, the need for temporary accommodation for people experiencing rough sleeping may exceed single occupancy availability, and communal facilities may still be needed. 10 , 11 Many of the communal accommodations for people who sleep rough in England typically have shared washing facilities, whilst other aspects such as kitchen use or sleeping arrangements may vary. 11 Thus, ‘communal accommodation’ for people experiencing rough sleeping within England can be defined as accommodations that have shared washing facilities (e.g. bathrooms). Communal accommodation increases the risk of COVID-19 outbreaks, 12 , 13 which are defined as two or more test-confirmed cases associated with a specific context within 14 days. 14 Hence, when communal accommodation is the only option, promotion of vaccinations, improved ventilation, mask-wearing, limiting close contact and frequent hand washing are recommended by the UK government. 10 However, during the pandemic, most communal accommodations in England were closed; 15 therefore, the effectiveness of these measures in communal accommodations is unclear. Understanding the efficacy of mitigation strategies in this setting is essential for effective future planning and implementation of guidance and policies intending to protect people experiencing rough sleeping against COVID-19 and other infections. This paper thus systematically reviews the international evidence on the effectiveness of measures to prevent the spread of SARS-CoV-2 infection in accommodations for people experiencing rough sleeping that have shared washing facilities.
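To illustrate how the outbreak definition used here (two or more test-confirmed cases associated with a specific setting within 14 days) could be applied to line-list data, a minimal sketch follows; the case records and identifiers are hypothetical and not drawn from any of the included studies.

```python
# Minimal sketch: flag settings meeting the outbreak definition given above
# (two or more test-confirmed cases linked to a setting within 14 days).
# The line-list records and identifiers are hypothetical.
from collections import defaultdict
from datetime import date

cases = [  # (setting_id, specimen_date) for test-confirmed cases
    ("hostel_A", date(2020, 4, 1)),
    ("hostel_A", date(2020, 4, 9)),
    ("hostel_B", date(2020, 4, 2)),
]

by_setting = defaultdict(list)
for setting, specimen_date in cases:
    by_setting[setting].append(specimen_date)

def has_outbreak(dates, window_days=14):
    dates = sorted(dates)
    # at least two confirmed cases with specimen dates within the window
    return any((later - earlier).days <= window_days
               for earlier, later in zip(dates, dates[1:]))

outbreak_settings = {s for s, dates in by_setting.items() if has_outbreak(dates)}
print(outbreak_settings)  # {'hostel_A'}
```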
Methods This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines 16 and was registered with PROSPERO. Review protocol registration number: CRD42021270053. Search strategy, selection criteria and screening process Five electronic databases: MEDLINE, PubMed, Cochrane Library, CINAHL and the WHO COVID-19 Database were searched from database establishment to the 6 December 2022. By combining ‘COVID-19’, ‘transmission’ and ‘setting or population’ terms, searches were restricted to research that investigated SARS-CoV-2 transmission mitigation strategies in communal accommodations for people experiencing rough sleeping. All study designs that quantitatively assessed the effectiveness of measures to protect against SARS-CoV-2 infections and COVID-19 complications in people experiencing rough sleeping in communal accommodation were included. Shared washing facilities are a common feature of ‘communal accommodation’ for people experiencing rough sleeping in England. 13 Thus, only studies that evaluated mitigation measures in accommodations that met this definition were included. Although the structure of ‘communal accommodation’ may differ across countries this review aimed to evaluate the effectiveness of mitigation measures in settings similar to the provisions in England. Consequently, when evaluating the effectiveness of interventions in communal accommodation for people experiencing rough sleeping all other structural characteristics (e.g. sleeping arrangements) were considered as mitigation measures. We included modelling studies, surveillance reports, pilot studies and randomized control trials (RCTs), and excluded media articles, reviews and opinion papers. A detailed description of inclusion criteria, search strategies and terms is available in the registered PROSPERO protocol. The final search strategy did not deviate from the protocol. Database searching was accompanied by grey literature searches and hand searching of reference lists of identified studies and publication lists of known experts. Lead researchers of ongoing relevant RCTs were contacted for potential preliminary reports or findings by the principal investigator. Title, abstract and full-text screening of all identified records were conducted independently and in duplicate by two researchers with almost perfect agreement (95%; Cohen’s k = 0.88) and discrepancies were resolved through discussion. Risk of bias assessment and data extraction Two researchers independently assessed the risk of bias for each study using appraisal tools from the Joanna Briggs Institute (JBI). 17 Two reviewers independently and in duplicate rated each domain and computed an overall risk of bias rating for each study (low, moderate and high). Discrepancies were resolved through discussions with consensuses reached on all records. Due to the anticipated low number of available literature in this review, no predefined exclusion quality cut-off was used. Using a pre-defined and previously used 12 Excel spreadsheet, the following information was extracted for all retained records: the first author with publication year, study design, study setting/country, target population/sample, mitigation measures assessed, outcomes measured including date measured and method used and main findings. Data extraction was completed independently and in duplicate by two researchers and subsequently combined, and discrepancies were independently checked by a third researcher. 
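The inter-rater agreement figures quoted above (per cent agreement and Cohen's kappa) can be reproduced with a short calculation; the sketch below is illustrative only, and the include/exclude decision lists are hypothetical rather than the review's actual screening data.

```python
# Minimal sketch: observed agreement and Cohen's kappa for two screeners.
# The decision lists are hypothetical, not the review's actual screening data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e is the agreement expected by chance from each rater's marginal rates."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[lab] / n) * (counts_b[lab] / n) for lab in labels)
    return p_o, (p_o - p_e) / (1 - p_e)

# Example: include/exclude decisions from two reviewers on the same records
a = ["include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "include", "include", "exclude"]
observed, kappa = cohens_kappa(a, b)
print(f"observed agreement = {observed:.2f}, kappa = {kappa:.2f}")
```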
Risk of bias assessment and data extraction Two researchers independently and in duplicate assessed the risk of bias for each study using appraisal tools from the Joanna Briggs Institute (JBI), 17 rating each domain and computing an overall risk of bias rating for each study (low, moderate or high). Discrepancies were resolved through discussion, with consensus reached on all records. Because little literature was anticipated to be available for this review, no predefined quality cut-off for exclusion was used. Using a pre-defined and previously used 12 Excel spreadsheet, the following information was extracted for all retained records: the first author with publication year, study design, study setting/country, target population/sample, mitigation measures assessed, outcomes measured (including the date measured and the method used) and main findings. Data extraction was completed independently and in duplicate by two researchers and subsequently combined, and discrepancies were independently checked by a third researcher. Data analysis and synthesis Due to heterogeneity in the mitigation strategies assessed, in how mitigation approaches were conceptualized and in how outcomes were reported across studies, meta-analysis was not possible and a narrative thematic analysis was conducted. The results were organized by intervention type.
Results Study characteristics We identified 883 records through database searching, including 186 duplicates. A total of 697 unique abstracts were screened and assessed for eligibility by two researchers in duplicate, of which 650 were excluded. The remaining 47 records and the 18 identified from other sources (65 in total) were sought for full-text screening. A total of 51 articles were excluded at this stage: 43 for not assessing the effectiveness of mitigation measures and another 8 because they did not meet our definition of communal accommodation for people experiencing rough sleeping. 18–25 The remaining 14 articles were included in this review ( Fig. 1 ). Studies were conducted in the United States (USA), England, France, Singapore and Canada. Two studies 26 , 27 were longitudinal. Ten studies were surveillance reports, 20 , 26–34 one was a pilot intervention report 35 and three were modelling or simulation studies. 36–38 All studies began their surveillance between March and April 2020, capturing the peak of the first wave of the COVID-19 pandemic, before vaccine rollouts in their respective countries. 39–42 Homelessness was defined inconsistently across the included studies. Nine studies included samples that contained only staff and/or residents within shelters, hostels and hotels used as emergency accommodations for people experiencing homelessness. The other five studies included additional accommodation types (e.g. squats) and/or groups of people living in precarious conditions (e.g. migrant workers). See Table 1 for a summary of included studies. Risk of bias The risk of bias was rated as low in three studies, 26 , 36 , 38 moderate in eight studies, 20 , 27 , 29–31 , 34 , 35 , 37 and high in three studies. 28 , 32 , 33 The risk of bias for each study is presented in Tables 2 and 3 . Study findings Individual interventions Eight cross-sectional studies reported the effectiveness of individual interventions, including single occupancy rooms, resident mobility, physical distancing between residents and exclusion of symptomatic staff. 20 , 28–34 Although the efficacy of each intervention was reported independently, not all mitigation measures were implemented in isolation (see Table 2 for more details). The evidence on the effectiveness of sleeping in single occupancy rooms and of restricting resident mobility is inconsistent. Two reports from France found that sleeping in communal rooms was not associated with a higher SARS-CoV-2 infection rate or risk of hospitalization compared with sleeping in a private room, 29 , 30 whereas in the USA sleeping in communal rooms was associated with higher rates of test positivity than sleeping in a private room. 33 Similarly, the effects of resident mobility varied. In a surveillance report from Canada, residents who remained in the same accommodation for 14 days were less likely to test positive. 20 Conversely, in a report from France, changing accommodations showed no relationship with the odds of being seropositive compared with not changing accommodations. 30 In France, spending less than a month in emergency shelters was associated with lower odds of being seropositive (compared with more than a month). 30 In a report of six US shelters, despite the implementation of multiple other infection control practices (see Table 2 for details), only accommodations that prohibited new residents reported no outbreaks. 32 Measures reducing contact between residents also showed conflicting results.
A report from France found that limiting how often residents spent more than 15 min within 1 m of other residents and staff was associated with lower odds of being seropositive. 31 The study reduced close contact by reducing the number of residents sharing sleeping, cooking and washing facilities. Compared with having 5 or fewer close contacts, having more close contacts was associated with greater odds of testing positive for SARS-CoV-2 (6–9 close contacts: OR = 2.7, 95% CI = 1.5–5.1 and >10 close contacts: OR = 3.4, 95% CI = 1.7–6.9). This is consistent with a report from Singapore that suggested there were no reported outbreaks in homeless shelters that had increased bed spacing and staggered meal and shower times. 28 However, because no further information was available in these studies, it is unclear precisely how physical distancing was defined in these contexts. On the other hand, a cross-sectional surveillance report of 63 shelters across seven states in the USA found that increasing bed spacing to 3 feet apart and filling fewer than 74% of beds were not associated with lower SARS-CoV-2 prevalence. 34 Instead, positioning beds head-to-toe and excluding symptomatic staff from working were associated with reduced odds of reporting SARS-CoV-2 prevalence above 2.9% (the median of the 7-day average of the six counties in the study). However, it is not clear whether these shelters implemented head-to-toe sleeping and/or excluded symptomatic staff from working individually or simultaneously with other measures. Combined measures A cross-sectional uncontrolled pilot study 35 and two longitudinal studies 26 , 27 looked at combinations of measures to reduce SARS-CoV-2 transmission risk. The strongest available evidence comes from a longitudinal report (with a low risk of bias) of four shelters (and associated hotels used for depopulation) in France that were assessed from March to November 2020. 26 In this report, reducing the density of communal accommodation, encouraging good hygiene practices and increasing social distancing helped to reduce the SARS-CoV-2 infection rate and to keep infection rates lower during subsequent waves of SARS-CoV-2, but did not prevent infections entirely. The study reported a decline from 21% of residents infected during the first wave of SARS-CoV-2 (March 2020) to 7% in the middle of the second wave (September 2020), following the implementation of the suggested mitigation measures. 26 Similarly, a report from the USA revealed that increased accommodation cleaning, more frequent hand washing, universal mask-wearing, off-site isolation, daily symptom screening, PCR testing three times per week and prohibiting residents who left the accommodation from returning reduced the SARS-CoV-2 infection rate from 45% of residents infected in April 2020 to less than 1% in May 2020. 27 In a Canadian uncontrolled pilot study, 35 reducing shelter density, increasing social distancing, providing rapid on-site testing, universal mask-wearing and providing an isolation area for people awaiting test results kept shelter SARS-CoV-2 infection rates at 1%, below that of the general population (estimated 5–7%), but could not entirely stop new infections within the shelter.
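For readers unfamiliar with how odds ratios such as those cited above for close contacts are derived, the sketch below reconstructs the standard calculation of an odds ratio and its Wald 95% confidence interval from a 2 × 2 table. The counts are hypothetical placeholders chosen only to illustrate the arithmetic, not the data of the cited reports.

```python
# Minimal sketch: odds ratio with a Wald 95% CI from a 2x2 table.
# a/b = infected/uninfected in the exposed group (e.g. >10 close contacts),
# c/d = infected/uninfected in the reference group (e.g. <=5 close contacts).
# The counts are hypothetical and only illustrate the calculation.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

or_, lower, upper = odds_ratio_ci(a=30, b=45, c=20, d=100)
print(f"OR = {or_:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```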
Modelling studies Three modelling studies were selected: two simulating shelters in the USA (both rated as having a low risk of bias) 36 , 38 and one simulating shelters in England 37 (summarized in Table 3 ). Both US studies estimated that daily symptom screening, relocation of some residents to hotels (to reduce shelter density), universal mask-wearing, off-site isolation and twice weekly PCR testing could help to reduce SARS-CoV-2 infections by between 62% 36 and 96% 38 when the reproduction number (R0) is low (R0 = 1.3 and 1.5, respectively). Under a low R0, it was estimated that together these mitigation measures had a 74% chance of averting an outbreak in the communal accommodation; however, as the community prevalence increased (to 2.9, 3.9 and 6.2), the estimated chance of these mitigation measures preventing an outbreak declined to 42%, 29% and 19%, respectively. 38 The simulation in England also estimated that combined packages of interventions could reduce infection rates by 45% and hospitalizations by 92% during the first wave of COVID-19 in the country; during the second spike in community prevalence, with the measures still in place, the estimated effect dropped to a 24% reduction in SARS-CoV-2 infections and 89% fewer hospitalizations.
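As a rough intuition for why modelled packages avert outbreaks less reliably as transmission intensifies, the sketch below uses a simple branching-process approximation with Poisson-distributed secondary cases. It is not a reconstruction of the cited models; the reproduction numbers and the assumed 60% combined efficacy of the mitigation package are illustrative assumptions only.

```python
# Minimal sketch, assuming a branching process with Poisson-distributed secondary
# cases: the chance that a single introduced case sparks a large outbreak, with and
# without a mitigation package that cuts transmission by `efficacy`. All parameter
# values are illustrative assumptions, not estimates from the reviewed studies.
from math import exp

def outbreak_probability(r_eff, iterations=500):
    """1 minus the extinction probability q, where q solves q = exp(r_eff * (q - 1))."""
    if r_eff <= 1:
        return 0.0  # subcritical transmission dies out
    q = 0.0
    for _ in range(iterations):  # fixed-point iteration converges to the smallest root
        q = exp(r_eff * (q - 1))
    return 1.0 - q

efficacy = 0.60  # assumed combined reduction in transmission from the package
for r0 in (1.3, 1.5, 2.9, 3.9, 6.2):
    unmitigated = outbreak_probability(r0)
    mitigated = outbreak_probability(r0 * (1 - efficacy))
    print(f"R0 = {r0}: outbreak risk {unmitigated:.2f} -> {mitigated:.2f} with the package")
```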
Discussion Main findings Existing evidence on the effectiveness of interventions to reduce SARS-CoV-2 transmission and COVID-19 complications among people experiencing homelessness in communal accommodation is weak, due to a reliance on cross-sectional study design and modelling studies as well as the risk of bias in study methodologies. However, the evidence shows that the implementation of multiple mitigation measures together can help reduce SARS-CoV-2 infections in communal accommodations, although not enough to stop all outbreaks. The pilot intervention in Canada 35 and the longitudinal surveillance report in France 26 provided the strongest evidence upon which to assess mitigation measures in this setting. Yet, many of the other studies lacked the critical information required to understand and assess the implemented interventions. Continued and better quality research into how to mitigate COVID-19 and other diseases in communal accommodation is needed, particularly taking into account how factors such as the prevalence of SARS-CoV-2 in the community can influence the effectiveness of mitigation measures. What is already known Communal accommodation is well recognized to increase the risk of transmission of SARS-CoV-2 12 and could accelerate the spread of other airborne pathogens, such as TB. 8 Severe complications from COVID-19 and TB are far more pronounced in vulnerable populations that have increased comorbidities and are under-vaccinated, such as people experiencing rough sleeping, migrant workers and refugees and asylum seekers. 2 , 43 , 44 Differing physical, social, economic and environmental factors increase the susceptibility to hazards for each of these populations, 45 which are further exacerbated by poor living conditions. 46 Thus, because it is precisely these populations that often reside in precarious housing or are living in overcrowded or communal accommodations (e.g. migrant processing centres, night shelters), 47 understanding how to protect these vulnerable populations from life-threatening diseases in communal accommodations is crucial. What this study adds This review adds to the previous literature by compiling the available international evidence to assess the effectiveness of COVID-19 mitigation strategies in communal accommodations for people experiencing rough sleeping. The findings from this review suggest that implementing multiple mitigation measures simultaneously, such as early identification and isolation of positive cases, reducing accommodation density, reducing close contacts and promoting better hygiene and mask-wearing, could under some circumstances help reduce SARS-CoV-2 transmission in communal settings. Similar mitigation measures have been shown to help reduce the spread of SARS-CoV-2 in schools 48 and shelters for asylum seekers 49 and other airborne transmissible conditions, like TB 50 and influenza. 51 However, this review also exposes the weakness of the available evidence concerning assessing the effectiveness of COVID-19 mitigation measures in communal accommodations for people experiencing rough sleeping. The literature is made up of mostly cross-sectional studies that were conducted during the first wave only and before vaccine rollouts in their respective locations. 
Because vaccine uptake is lower in people experiencing rough sleeping 6 and some evidence suggests that there is still transmission in communal settings following vaccination, 52 it is increasingly important to understand what interventions are effective at reducing transmission risks in communal accommodations for people experiencing rough sleeping. However, with most of the available evidence being cross-sectional and many with a high risk of bias, more high-quality research that allows causality to be determined is needed to help in identifying the measures that are the most effective. Furthermore, despite the recognition that good ventilation is likely to play a role in protecting people in communal accommodation against SARS-CoV-2, TB and Influenza, 10 , 50 , 51 no studies captured in this review assess ventilation as a mitigation measure in communal accommodation for people experiencing rough sleeping. Finally, this review demonstrates that factors such as community prevalence can influence how effective different mitigation measures are. For instance, a modelling study estimated that universal mask-wearing on its own would reduce the infection rate in the shelter by 86% when the community prevalence was low, but only by 56% when community infection rates were high. This is important because during periods of low community prevalence individual mitigation measures may be sufficiently effective. France and Canada’s national lockdown was stricter than in the USA, 53 where large social and religious gatherings still occurred. 54 Thus, national policies and behaviours and attitudes of residents in the surrounding communities may also influence the effectiveness of mitigation measures. Limitations of study There are, however, some caveats of this review that need to be considered. To begin with, this review did not include pre-print servers which may have resulted in some available evidence being missed. Additionally, the study designs of the literature captured by this review are too limited to allow concrete recommendations on individual mitigation measures for policymakers and accommodation providers to be provided. Finally, variability in the types of communal accommodations reported on and the country-level differences in the COVID-19 landscapes in the captured studies make relating the findings from this review to the UK setting more difficult.
Conclusions This review reveals that the available evidence to assess the effectiveness of COVID-19 mitigation strategies in communal accommodations for people experiencing rough sleeping is weak. Yet, taken together, the evidence suggests that even though no intervention or 'package of interventions' is likely to prevent outbreaks entirely, such measures can reduce SARS-CoV-2 infections and COVID-19 complications in this setting. Combining the opening of additional accommodation to reduce the density in communal shelters, universal mask-wearing and proper hygiene practices (e.g. hand washing, less face touching and good coughing etiquette) may help reduce infection rates in communal accommodations for people experiencing rough sleeping. However, the evidence also suggests that situational factors such as community prevalence will play a role in the efficacy of implemented mitigation packages. It is unclear whether other individual mitigation strategies, or combinations not assessed here, could prevent outbreaks or further reduce infection risks in communal accommodations for people experiencing rough sleeping. Thus, better-quality research is urgently needed in this area.
Abstract Background Accommodations with shared washing facilities increase the risks of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection for people experiencing rough sleeping, and a better understanding of which interventions are effective in reducing these risks is needed. Methods Systematic review (search date 6 December 2022) with methods published a priori. Electronic searches were conducted in MEDLINE, PubMed, Cochrane Library, CINAHL and the World Health Organization (WHO) COVID-19 Database and supplemented with grey literature searches, hand searches of reference lists and publication lists of known experts. Observational, interventional and modelling studies were included; screening, data extraction and risk of bias assessment were done in duplicate, and narrative analyses were conducted. Results Fourteen studies from five countries (USA, England, France, Singapore and Canada) were included. Ten studies were surveillance reports, one was an uncontrolled pilot intervention and three were modelling studies. Only two studies were longitudinal. All studies described the effectiveness of different individual mitigation measures or packages of measures. Conclusions Despite a weak evidence base, the research suggests that combined mitigation measures can help to reduce SARS-CoV-2 transmission but are unlikely to prevent outbreaks entirely. Evidence suggests that community prevalence may modify the effectiveness of mitigation measures. More longitudinal research is needed. Systematic review registration PROSPERO CRD42021292803.
Conflict of interest There are no conflicts of interest to declare. Funding This research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors. Authors’ contributions SH contributed to the design of this research and participated in literature searching, data extraction, data synthesis/analysis, manuscript preparation, and reviewing and editing. OC participated in literature searching, data extraction, data synthesis/analysis, manuscript preparation, and reviewing and editing. MX and JS contributed to the design of the research and manuscript preparation and editing. RC, EP, and JL participated in research design and manuscript editing. ICM was responsible for the supervision of the entire project and participated in the design of the research and reviewing and editing of the manuscript.
Steven Haworth, Research Officer Owen Cranshaw, Research Assistant Mark Xerri, Programme Manager: Rough Sleeping Drug & Alcohol Treatment Jez Stannard, Senior Programme Manager: Rough Sleeping Drug & Alcohol Treatment Rachel Clark, Head of Evidence and Evaluation Emma Pacey, Policy Lead for Inclusion Health Gill Leng, National Health and Homelessness Adviser Ines Campos-Matos, Deputy Director for Inclusion Health
CC BY
no
2024-01-16 23:43:51
J Public Health (Oxf). 2023 Jul 20; 45(4):804-815
oa_package/94/d1/PMC10788840.tar.gz
PMC10788841
37328938
Introduction Transient bone osteoporosis (TBO) is a rare, misdiagnosed, self-limiting condition of unclear etiology. TBO is characterized by pain, 1 loss of function, absence of previous trauma, 2 osteopenia on plain radiography and bone marrow edema at magnetic resonance imaging (MRI). 3 TBO typically affects middle-aged men 4 or, less commonly, women during the third trimester of pregnancy and the immediate post-partum period. 5 , 6 TBO usually presents with sudden-onset pain in weight-bearing areas, especially in the lower limb, often radiating distally. 7 , 8 In most patients, TBO leads to functional disability within 4–8 weeks, followed by a gradual disappearance of the symptoms in the following 6–12 months. 9 , 10 The clinical examination might demonstrate limited effusion. Imaging, including plain radiographs, bone scans and MRI, is used for the diagnosis. MRI is fundamental for the diagnosis, evidencing nonspecific and localized bone marrow edema hyperintense in T2 sequences. 6 , 7 , 11 , 12 TBO should be differentiated from bone osteonecrosis and metastases. 13 , 14 Other less common conditions to consider for the differential diagnosis are regional migratory osteoporosis, reflex sympathetic dystrophy, arthritis of various etiologies such as septic arthritis, osteomyelitis and insufficiency fracture. 15 Nevertheless, it remains a diagnosis of exclusion, usually delayed, partly from the lack of awareness. 16 Despite a benign prognosis, the long clinical course causes prolonged disability. Given the limited evidence in the current literature, consensus on optimal management is lacking. In most cases, conservative management allows the resolution of symptoms within 6–12 months. 17 The main conservative approaches include restricted weight-bearing, anti-resorptive medications and analgesics. 6 This systematic review investigates current management of TBO.
Methods Eligibility criteria All clinical studies that investigated modalities for the management of TBO were accessed. According to the authors' language capabilities, articles in English, German, Italian, French and Spanish were eligible. Studies with level of evidence I to IV, according to the Oxford Centre of Evidence-Based Medicine, 18 were considered. Reviews, opinions, letters, editorials, and animal, in vitro, biomechanical, computational and cadaveric investigations were not considered. Search strategy This systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the 2020 PRISMA statement. 19 The following PICO algorithm was established for the database search: P (Problem): TBO, I (Intervention): clinical management, C (Comparison): conservative modalities, O (Outcomes): clinical outcome. In April 2023, PubMed, Web of Science, Google Scholar and Embase were accessed with no constraints. The following keywords were used in combination using the Boolean operators AND/OR: transient osteoporosis syndrome, transient bone edema syndrome, management, treatment, weight bearing, surgery, conservative, outcome, pharmacology, drugs. Selection and data collection Two authors (F.M. & G.V.) independently conducted the database search. All resulting titles were screened by hand, and the abstracts of the articles that matched the topic were read. The full texts of the articles of interest were then accessed. The bibliographies of the full-text articles were also screened by hand. Any disagreements were discussed and settled by a third senior author (N.M.). Data items Two authors (F.M. & G.V.) independently performed data extraction. The following data were extracted: author and year, name of the journal and study design, length of the follow-up, number of included patients, type and number of joints, mean age and body mass index (BMI) of the included patients, number of women, type of treatment and main findings. Study risk of bias assessment The Methodological Index for Non-Randomized Studies (MINORS) was used to evaluate the quality of the included articles. 20 The MINORS involves eight items for non-comparative studies and 12 items for comparative studies. The optimal global MINORS score is 16 points for non-comparative studies and 24 points for comparative studies. Synthesis methods For descriptive statistics, the IBM software (version 25) was used. The arithmetic mean and standard deviation were used for continuous variables.
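As an illustration of how per-study descriptive statistics can be combined into the overall figures reported in the results (e.g. mean age across all included patients), the sketch below assumes each study contributes a sample size, a mean and a standard deviation; the numbers are hypothetical placeholders, not the extracted data.

```python
# Minimal sketch: combining per-study means and SDs of a continuous variable
# (e.g. patient age) into an overall, sample-size-weighted mean and SD.
# The per-study values are hypothetical placeholders, not extracted data.
import math

studies = [  # (n_patients, mean_age, sd_age) -- illustrative only
    (19, 36.5, 9.0),
    (15, 41.0, 11.5),
    (14, 37.2, 10.0),
]

total_n = sum(n for n, _, _ in studies)
pooled_mean = sum(n * mean for n, mean, _ in studies) / total_n
# Total sum of squares = within-study + between-study components.
total_ss = sum((n - 1) * sd ** 2 + n * (mean - pooled_mean) ** 2
               for n, mean, sd in studies)
pooled_sd = math.sqrt(total_ss / (total_n - 1))
print(f"pooled mean = {pooled_mean:.1f}, pooled SD = {pooled_sd:.1f}")
```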
Results Study selection The literature search resulted in 416 clinical studies. Duplicate records ( N = 107) were excluded. A further 285 articles were excluded with reason: not matching the topic ( N = 181), inappropriate study design ( N = 98), language limitation ( N = 2) and full-text not available ( N = 1). A further three articles did not report quantitative data for the outcome of interest and were thus excluded. This left 21 articles for inclusion. The flow chart of the literature search is shown in Figure 1 . Methodological quality assessment Based on the MINORS scale, the 20 non-comparative studies had a mean score of 6.6 and the only comparative study scored 20 points. Overall, the MINORS indicated a low methodological quality of the included studies ( Table 1 ). Synthesis of results Data from 65 patients (74 treated joints) were collected. About 23% (15 of 65) were women. The mean length of the follow-up was 14.4 months. The mean age of the patients was 38.1 ± 10.4 years, and the mean BMI was 28.8 kg/m². Conservative management of TBO proved effective at mid- and long-term follow-up, as judged by the resolution of symptoms and MRI findings. Treatment with bisphosphonates seems to alleviate pain and accelerate both clinical and imaging recovery. General study details, patient characteristics and main results of the included studies are shown in Table 2 .
Discussion According to the main findings of the present study, conservative management leads to the resolution of symptoms and MRI findings at midterm follow-up. Administration of bisphosphonates might alleviate pain and accelerate both clinical and imaging recovery. Several case reports described conservative management with limited weight bearing, physical therapy and nonsteroidal anti-inflammatory drugs (NSAIDs), reporting full recovery in all patients at approximately one-year follow-up. 16 , 21–26 , 39 Vaishya et al. 27 conducted a study analyzing 12 hips in 14 patients with hip TBO treated conservatively. All the patients returned to work with a complete resolution of symptoms at 17.1 weeks. 27 At 1.3-year follow-up, no recurrence was observed in any patient. 27 Baishareh et al. performed an observational study on 15 patients with symptomatic hip TBO. 28 The mean age of the patients was 41 years. 28 Ten of 15 patients underwent core decompression and 5 patients were treated conservatively. 28 The time needed for full recovery was 5.8 weeks for those who underwent drilling and 48.3 weeks for the three patients treated conservatively. 28 Two patients who underwent conservative management did not achieve full recovery at the time of follow-up. 28 The authors hypothesized that hip core decompression could be considered as a treatment modality to achieve faster recovery in patients with hip TBO. 28 Treatment with bisphosphonates has shown promising results, shortening the duration of symptoms. 8 , 9 , 29 Agarwala et al. 30 administered a single intravenous dose of zoledronic acid to 19 adults with hip TBO. Symptoms ceased at an average of 2.8 weeks, with no adverse events and no recurrence up to the last follow-up at a mean of 35 months. About 84% (16 of 19) of patients did not demonstrate evidence of TBO at MRI. Berman et al. 3 described the case of a 35-year-old male patient who presented with progressive and disabling pain in his left hip. Risedronate 35 mg once weekly for 12 weeks was administered. Calcium and vitamin D were also supplemented. 3 After 3 months, the patient reported a complete resolution of symptoms and disability. 3 Three years later, following the onset of contralateral TBO, the same treatment was administered, obtaining the same results at 2 months follow-up. 3 Furthermore, they presented the case of a 64-year-old with a two-week history of progressively increasing left knee pain. A high-resolution MRI of his left distal femur revealed deterioration in bone microarchitecture (manifested by trabecular loss and disruption). 3 The regional bone mineral density of his left lateral femoral condyle was 0.96 g/cm². 3 The patient continued with routine calcium and vitamin D supplementation. 3 The patient reported spontaneous resolution of his knee pain over months. 3 A further regional knee bone density measurement of the left lateral femoral condyle showed a marked improvement to 1.63 g/cm² at one year of follow-up. 3 Pande et al. 31 reported a 43-year-old male patient with hip TBO managed by weight-bearing restriction, physiotherapy, administration of alendronate 10 mg daily, and calcium and vitamin D supplementation. At seven weeks, the patient showed a complete remission of symptoms with full recovery of the range of motion. 31 At five months follow-up, no evidence of TBO was observed at MRI and the administration of alendronate was discontinued. 31 At seven months follow-up, the patient had resumed normal activities.
Paoletta et al. 32 described a 46-year-old man with a diagnosis of hip TBO treated with intramuscular clodronate 200 mg for a month and weight-bearing restriction. Significant pain relief, improved motion and a significant reduction of bone edema at MRI scans were observed at 2 months follow-up. 32 Seok et al. 9 presented the case of a 46-year-old male with a diagnosis of hip TBO, treated with a single dose of intravenous zoledronate 5 mg, weight-bearing restriction and hot packs. 9 At the two-week follow-up, slight pain in the inguinal area persisted, but no pain was observed during weight bearing. An additional 2 weeks of limited weight bearing were recommended. 9 At four weeks follow-up, the pain in the inguinal area had disappeared. At six months follow-up, no evidence of TBO was observed at MRI. 9 TBO, and especially transient osteoporosis of the hip, frequently occurs in pregnant women in the third trimester or in the immediate postpartum period. 1 , 7 , 16 , 26 , 33 , 34 , 38 Pregnancy limits the choice of pharmacotherapy. Brodell et al. 35 suggested that the benefit of radiographic imaging may outweigh the potential risks in the third trimester of pregnancy. Furthermore, the gold standard diagnostic imaging modality is MRI, which should be considered safe in the third trimester. 36 Cesarean section is preferable to vaginal delivery to avoid the risk of trauma to the weakened femoral head in cases of TBO of the hip. 1 The present study has several limitations. The overall quality of the evidence was low. Most of the available data come from case reports and retrospective studies. In this respect, the results are not fully generalizable. Given the rarity of TBO, high-quality studies on a larger scale are difficult to conduct. Patient characteristics were heterogeneous between studies. Given these limitations, the results of the present study must be interpreted with caution.
Conclusion A conservative approach leads to the resolution of symptoms and MRI findings of TBO at midterm follow-up. Administration of bisphosphonates seems to alleviate pain and accelerate both clinical and imaging recovery.
Abstract Introduction Transient bone osteoporosis (TBO) is characterized by persistent pain, loss of function, no history of trauma and magnetic resonance imaging (MRI) findings of bone marrow edema. Source of data PubMed, Google Scholar, Embase and Web of Science were accessed in February 2023. No time constraints were applied to the search. Areas of agreement TBO is rare and misunderstood, typically affecting women during the third trimester of pregnancy or middle-aged men, leading to functional disability for 4–8 weeks followed by self-resolution of the symptoms. Areas of controversy Given the limited evidence in the current literature, consensus on optimal management is lacking. Growing points This systematic review investigates the current management of TBO. Areas timely for developing research A conservative approach leads to the resolution of symptoms and MRI findings at midterm follow-up. Administration of bisphosphonates might alleviate pain and accelerate both clinical and imaging recovery.
CRediT for author contributions Filippo Migliorini (Conceptualization, Data curation, Project administration, Supervision, Writing—review & editing), Gianluca Vecchio (Data curation, Formal analysis, Investigation), Christian Weber (Investigation, Methodology, Software), Daniel Kaemmer (Data curation, Investigation, Software, Visualization), Andreas Bell (Investigation, Methodology, Validation, Visualization), Nicola Maffulli (Conceptualization, Formal analysis, Writing—original draft). Conflict of interest statement The authors declare that they have no conflict of interest. Funding No external source of funding was used. Data availability The data underlying this article are available in the article and in its online supplementary material. Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors. Informed consent For this type of study, informed consent is not required.
CC BY
no
2024-01-16 23:43:51
Br Med Bull. 2023 Jun 16; 147(1):79-89
oa_package/2c/fc/PMC10788841.tar.gz
PMC10788842
37496202
Introduction The prevalence of childhood obesity has increased globally over several decades. 1 Overweight children are more likely to become overweight adults. 2 Obesity is a risk factor for a range of diseases in later life, including cancer, diabetes, cardiovascular disease and osteoarthritis. 3 , 4 These diseases cause a decrease in quality of life, premature mortality and morbidity. 5 , 6 Decreased levels of physical activity (PA), increased levels of sedentary time and an increased caloric intake are some of the factors influencing childhood obesity rates, but these are not the only factors responsible. 7–10 Screen-based activities increase children's exposure to energy-dense food advertisements, leading to children consuming such food items. 11 Reducing sedentary behaviours improves body composition in youths, 12 and exercise is part of the management of paediatric obesity. 13–15 Due to the popularity of video gaming, active videogames (AVG) may be an option to promote healthy living among children. 16 , 17 Playing AVG results in an increased heart rate, oxygen consumption and energy expenditure. 18 AVG may increase PA levels sufficiently to produce health benefits in children and adolescents. 16 Any increase in PA may produce positive health benefits. 19 A lack of enjoyment is an indicator that children will not participate in exercise; 20 therefore, AVG may help overcome this barrier, as they stimulate enjoyment. 16 Overweight children spend more time watching television and playing videogames than children who are not overweight. 21 This study aims to report whether AVG can be utilized, either solely or as part of a multi-faceted intervention, to reduce weight and improve body composition in overweight and obese youths.
Methods This review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria. 22 This systematic review and meta-analysis was not registered with PROSPERO database. Literature search In order to assess the effect of AVG on overweight children a systematic literature search was conducted by two researchers (LP, PW) on eight databases from the index date of each database through to October 2021. These databases were SportDiscus; ASSIA; Embase; Medline; CINAHL Plus; CENTRAL; CDSR and PsychINFO ( Appendix 1 ). Inclusion and exclusion criteria are described in Table 1 . A ‘building block’ process (PICOS) was employed in constructing the search. 23 , 24 Because AVG are a relatively recent phenomenon, none of the databases had a thesaurus heading for it. This intervention element was resolved by combining free text terms (using truncation and/or proximity operators) to find references to AVG. We used the combined thesaurus terms for obesity, paediatric obesity, overweight and body mass index (BMI). We conformed to the World Health Organization (WHO) definition of age range for adolescents (10–19). 25 This term was combined with other thesaurus terms for children, excluding infants, to encompass an age range of 2–19. Three researchers (MB, LP, PW) independently screened the retrieved papers by titles and abstracts and the eligible studies were further screened by full text. The search was supplemented with reference and citation tracking of studies included in the qualitative analysis. 26 Only studies that were published in English were included. Studies that were not randomized were included in the qualitative but not the quantitative analysis. Studies were first selected if the inclusion criteria were met for the qualitative analysis and further criteria were then applied to these studies to determine which would be selected for the quantitative analysis. Quality assessment The quality of the studies included was assessed using the Public Health Critical Appraisal Checklist. 27 The checklist consists of 23 criteria, including whether the study design, sampling and data collection was appropriate; whether confounders were considered; whether the study was ethical; whether the statistics were appropriate; and whether the results of the study are relevant to public health practice. Each study was assessed against the check list, 27 and was determined to be of high, moderate or low quality depending on how many criteria the study satisfactorily fulfilled on the checklist. 27 The more the criteria that were satisfactorily fulfilled, the higher the quality of research study was deemed to be. The quality of the included studies was assessed by two authors independently and a consensus was reached where there was disagreement. Qualitative analysis A narrative summary of the main findings of the papers included in the qualitative analysis is provided. Quantitative analysis All randomized controlled trials (RCTs) comparing exergaming with controls in terms of weight, BMI, BMI Z-scores (or BMI percentiles) and/or body mass composition (percentage or fat and/or lean mass) were included in the quantitative analysis. BMI percentiles, where present, were converted into BMI Z-scores. Subsets of data published in studies already included in this systematic review and meta-analysis were excluded. Interventions in the control groups were either nothing or recommendations on PA or other schemes of non-exergaming PA. 
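The conversion of BMI percentiles into BMI Z-scores mentioned above can be performed with the inverse normal cumulative distribution function when the percentile is expressed relative to a growth reference. The sketch below shows one standard way to do this step, not necessarily the exact procedure the authors used; the percentile values are hypothetical.

```python
# Minimal sketch: converting BMI percentiles to BMI Z-scores via the inverse
# normal CDF. This assumes the percentile is already expressed relative to a
# growth reference; the example percentiles are hypothetical.
from scipy.stats import norm

def percentile_to_z(percentile):
    """Map a percentile in (0, 100) to the corresponding Z-score."""
    return norm.ppf(percentile / 100.0)

for p in (85.0, 93.9, 97.0, 99.0):
    print(f"BMI percentile {p:>5.1f} -> Z = {percentile_to_z(p):+.2f}")
```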
Where more than two groups of exergaming PA were present in the same study, their data were merged into one intervention group for the purposes of the comparison with controls. Two investigators (FDN, LP) extracted data independently. Disagreements were resolved by consensus after contacting the authors of the included studies. Where three or more studies reported the same outcome, a meta-analysis examining pooled effect estimates with 95% confidence intervals (CI) was performed using both fixed and random effects models, with the random effects model used for the interpretation of findings. Heterogeneity was assessed with the I² index. The overall effect was tested using Z-scores, and statistical significance was set at P < 0.05. Where possible, subgroup analyses were performed using data from intention-to-treat studies only. Data analyses were performed using Review Manager (RevMan) version 5.3. 28
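For transparency about what the pooled estimates in the results represent, the sketch below reproduces in outline a DerSimonian–Laird random-effects pooling of study-level mean differences, with a fixed-effect comparison and the I² heterogeneity index, i.e. the kind of calculation RevMan performs. The study-level values are hypothetical placeholders, not the data extracted for this review.

```python
# Minimal sketch: fixed-effect and DerSimonian-Laird random-effects pooling of
# mean differences, with the I^2 heterogeneity index. Study-level values are
# hypothetical placeholders, not the data extracted for this review.
import math

# (mean difference, standard error) per study -- illustrative only
studies = [(-0.08, 0.03), (-0.12, 0.05), (-0.05, 0.04), (-0.10, 0.06), (-0.02, 0.05)]

w = [1 / se ** 2 for _, se in studies]                      # fixed-effect weights
fixed = sum(wi * y for wi, (y, _) in zip(w, studies)) / sum(w)
Q = sum(wi * (y - fixed) ** 2 for wi, (y, _) in zip(w, studies))
df = len(studies) - 1
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)                               # between-study variance
i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0         # heterogeneity (%)

w_re = [1 / (se ** 2 + tau2) for _, se in studies]          # random-effects weights
pooled = sum(wi * y for wi, (y, _) in zip(w_re, studies)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"fixed effect MD = {fixed:.3f}")
print(f"random effects MD = {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f}), I^2 = {i2:.0f}%")
```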
Results A total of 168 unique articles' titles and abstracts were screened. A consensus was reached on 30 articles for full-text appraisal; 12 articles entered the qualitative analysis and seven entered the quantitative analysis ( Fig. 1 ). A total of 10 individual interventions reported in 12 studies were included ( Table 2 ); 29–40 10 studies were RCTs 29 , 32–40 and two were pilot studies. 30 , 31 The sample sizes varied between 4 and 327, with ages between 7 and 19 years. The studies were conducted in Canada, 29 Brazil, 30 the USA, 31 , 36–40 Iran 33 and New Zealand. 32 , 34 , 35 The length of the interventions varied between 6 and 24 weeks. The two pilot studies consisted of only one intervention group each, with no control group. 30 , 31 Nine of the RCTs used a control with no intervention. 32–40 Four studies were conducted in a laboratory. 29 , 30 , 31 , 40 One study conducted the active video gaming within the community but in a facilitated setting. 31 Five studies provided the participants with the AVG to take home. 31 , 33–36 One study was conducted within a school. 38 Quality assessment Using the Public Health Critical Appraisal Checklist, 27 five studies were of high quality, 32 , 34 , 35 , 37 , 39 four studies of moderate quality 29 , 31 , 33 , 40 and three studies of low quality. 30 , 36 , 38 Four studies did not provide information on what equipment was used to measure height and weight. 31 , 36 , 38 , 39 The other studies all used stadiometers to measure height 29 , 30 , 32–35 , 37 , 40 and measured weight using bioelectrical impedance scales, 29 , 33 calibrated scales, 30 Salter scales, 32 , 34 , 35 digital scales 37 and calibrated electronic scales. 39 Qualitative analysis Characteristics of the included studies are summarized in Table 2 . Analysed outcomes were weight (8 out of 12 studies), 29–36 BMI (8/12), 29–36 BMI Z-scores (8/12), 31–35 , 37 , 39 , 40 waist circumference (4/12), 29 , 32 , 34 , 35 body fat percentage (5/12), 29 , 30 , 32 , 34 , 35 fat mass (4/12), 29 , 32 , 34 , 35 BMI percentile (4/12), 29 , 30 , 33 , 38 fat-free mass (3/12), 32 , 34 , 35 weight Z-score (2/12), 33 , 37 abdominal circumference (1/12) 30 and fat mass percentile (1/12). 37 A total of 11 studies (11/12) reported a significant decrease in at least one weight outcome. 29–39 One study reported a significant decrease in body fat percentage, but only when the two intervention groups, cycling to AVG or music, were combined. 29 Three interventions reported a significant decrease in BMI. 30–35 Four interventions demonstrated a significant reduction in BMI Z-scores 31 , 32 , 34 , 35 , 37 , 39 and one did not. 40 In one study, a significant reduction was only observed after an outlier had been removed from the control group; this participant had decreased their BMI Z-score by 3.3 standard deviations below the mean. 37 In the intervention by Staiano et al., 38 the cooperative group showed a significant drop in BMI percentile compared with the control, though this was not mirrored in the competitive arm. One study reported a significant decrease in abdominal circumference. 30 Several interventions recorded a significant decline in body fat percentage 29 , 32 , 34 , 35 and a significant decrease in fat mass. 32 , 34 , 35 Some interventions (7/12) consisted of an AVG-only intervention and a control. Maddison et al. 34 reported significantly reduced BMI (0.24; P = 0.02), BMI Z-score (0.06; P = 0.03), weight (0.72 kg; P = 0.02), body fat percentage (0.83%; P = 0.02) and body fat (0.8 kg; P = 0.05).
No significant differences were identified when sub-group analyses by ethnicity and sex were performed. 32 Irandoust et al. 33 reported a significant decrease in body weight and BMI. Murphy et al. 36 described a significantly smaller weight gain in the intervention group of 0.91 pounds compared with an increase of 2.43 pounds in the control group ( P = 0.017). There was a non-significant change in BMI. Staiano et al. reported a significant decrease in weight Z-score both in the intent-to-treat analysis (intervention −0.1 (standard deviation 0.05) versus control 0.04 (standard deviation 0.05); P = 0.049) and without the outlier (intervention −0.09 (0.05) versus control 0.07 (0.04); P = 0.022). There was no significant difference for BMI Z-score in the intent-to-treat analysis; however, without the outlier, there was a significant change (intervention −0.06 (0.03) versus control 0.03 (0.03); P = 0.016). 36 Wagener et al. 39 described no significant change in BMI Z-score. Carrasco et al. 30 described a significant decrease in body mass (47 to 45 kg; P = 0.0018), BMI (23.02 to 22.22; P = 0.0005) and abdominal circumference (81.82 to 79.97 cm; P = 0.0223). Christison et al. 31 described a study with a significant decrease in BMI (31.07 to 30.59; P = 0.002) and BMI Z-score (2.24 to 2.17; P < 0.0001). Staiano et al. 38 described a study consisting of two similar intervention groups (cooperative and competitive) and a control group. The cooperative group experienced a significantly greater weight loss than the control (mean = 1.65 kg, P = 0.021), with the BMI percentile decreasing from 93.93% to 84.74%. There was no significant difference in weight between the competitive group and the control. Adamo et al. 29 compared cycling with an AVG and cycling to music. There were no significant changes in body weight, BMI, fat mass, fat-free mass or waist circumference. There was a small but significant reduction in body fat percentage when the two groups were combined. Trost et al. 39 compared two groups that each received a weight management program, one with AVG and one without. The AVG group experienced a greater reduction in BMI Z-score (0.14; P < 0.001). Quantitative analysis Seven studies entered the quantitative analysis ( Fig. 1 ). 29 , 34 , 36–40 Table 3 shows the main characteristics of the RCTs that entered the quantitative analysis and the results of the data extraction. Four RCTs, accounting for 358 randomized subjects, reported data on weight, 29 , 34 , 36 , 38 three, accounting for 319 individuals, reported data on BMI 29 , 34 , 36 and five, accounting for 433 randomized subjects, reported BMI Z-scores, allowing calculation of pooled effect estimates. 29 , 34 , 37 , 39 , 40 Subjects who underwent AVG showed lower weight than the control groups (mean difference [random effects]: −2.66 kg; 95% CI: −5.67, +0.35), with no heterogeneity (I² = 0%). These differences were not statistically significant ( P = 0.08, Fig. 2 ), and this was still true after selecting intention-to-treat studies only. Participants in the exergaming groups also showed lower BMI values (mean difference [random effects]: −2.29; 95% CI: −4.81, +0.22) with high heterogeneity (I² = 49%) and no significant results ( P = 0.07, Fig. 3 ). This is possibly due to heterogeneity, as the same outcome showed significant results using the fixed effects model (−1.29; 95% CI: −2.20, −0.38; P = 0.006).
BMI Z-score was significantly reduced among the exergaming population (mean difference: −0.09; 95% CI: −0.12, −0.05; P < 0.0001, Fig. 4 ), although again with some heterogeneity (I² = 34%).
Discussion Main findings of this study A total of 11 studies reported significant results in at least one weight outcome. A significantly lower BMI Z-score was observed within the meta-analysis. Four interventions measuring BMI 30 , 31 , 33 , 34 and four interventions calculating BMI Z-scores 31 , 34 , 37 , 39 resulted in a significant decrease from baseline. Similar results have been found in a comparable intervention carried out in children of mixed weights. 41 The lack of statistical significance for some outcomes in the meta-analysis is likely due to high heterogeneity, small samples and the small number of available RCTs. Heterogeneity can be explained by the inconsistent methodologies of these studies: both the interventions and the controls varied. Only a limited number of subgroup analyses were possible due to the low number of included studies. For the same reason, it was not possible to produce funnel plots and analyse for publication bias. What is already known on this topic Exergaming technologies are relatively new and have not been considered a viable weight control or PA promotion methodology by many researchers. AVG have been shown to elicit a higher energy expenditure in children compared with sedentary activities and to increase heart rate, oxygen consumption and energy expenditure to levels similar to those of light to moderate PA in children. 18 , 42 , 43 However, it is less well known whether children using AVG as exercise would play with sufficient vigor and frequency to gain cardiovascular or health benefits. 43 The enjoyment that AVG are designed to stimulate may be essential to their ability to promote PA and thereby help children manage their weight more healthily. Boredom has been shown to be a barrier to long-term AVG play. 16 Longer interventions in this review experienced higher dropout rates. 34 , 38 One potential way of overcoming boredom with AVG play after prolonged use is to provide multiple games. 31–35 , 37 , 39 What this study adds This systematic review and meta-analysis investigated the effectiveness of AVG in aiding the weight management of overweight children and adolescents. The results suggest that AVG can help counter the increasing trend of childhood obesity, either solely or within a well-established weight management program. Exergaming can be utilized as one component in a multi-focal weight management intervention or as the sole constituent. Many interventions employed AVG as the sole component. 29 , 30 , 32–38 , 40 Two incorporated AVG into a holistic weight management program. 31 , 39 Combining a weight management program with AVG can improve health outcomes compared with a more traditional weight management program. 39 This may be because children playing AVG participated in more exercise than children who did not. 39 Another intervention demonstrated similar results. 31 It has been suggested that a holistic approach is needed for public health interventions to be successful. 44 Therefore, it may be beneficial to combine AVG with interventions that target other behavioural changes. AVG-only interventions have been shown to produce a decrease in weight outcomes in overweight children, 29 , 30 , 32–39 and this may be because children are replacing sedentary behaviours with PA. 19 A weight management intervention which included AVG resulted in children decreasing their non-AVG play by ~9 min/day and increasing their AVG play by 10 min/day compared with controls.
34 This may minimize calorific intake by reducing exposure to snack food advertisement. 45 Children increase in weight as they’re growing. The best outcome to evaluate weight control is therefore the BMI Z-score. 46 Body mass composition could be taken into consideration, but its distribution in the population is affected by gender and age (normal fat mass percentages are higher in females and change over time). Interventions in all settings produced significant findings towards a healthier trend. It may be advantageous to run AVG interventions from children’s homes as it may be easier for parents to encourage their children to play AVG rather than encouraging them to abstain from videogames altogether. 34 Researchers may want to cooperate with the videogame industry to produce games that aim to control weight and achieve the recommended PA levels in such a way that the activity is enjoyable and sustainable. All games examined in these trials were created with the purpose of entertaining a broad audience. However, if they were designed with the express aim of producing positive effects on the health of children and adolescents while incorporating the principles of the evidence-based medicine, then perhaps we could observe more robust results. Future studies should focus on interventions with bigger sample sizes and longer follow-up period to observe if AVG can result in a prolonged change in body composition in overweight youths. Limitations of this study Limitations of the individual studies include small sample sizes, high dropout rates, lack of long-term follow-up and low number of weight outcomes measured. One study was limited as the children were very overweight and therefore most interventions would be more likely to have a positive significant effect. 30 Only three interventions had a follow-up period that lasted several months and all of these had the highest dropout rates. 33 , 34 , 38 The follow-up period in these studies is possibly still not sufficient to determine the long-term effects of AVG on weight outcomes of overweight children. Out of the 12 studies included in the qualitative analysis, only two 30 , 32 are based in low-/middle-income countries and none of the studies included children below the age of 7. This could limit the generalizability of the results of this review to older children from higher income countries. Grey literature was excluded from the search since it was found to be seriously affected by the marketable nature of the products examined. Literature with commercial purposes tends to highlight the positive features ignoring potential adverse effects for health, while the media sometimes exaggerates the negative effects and can even exhibit hostility to videogames. However, strengths of this study must also be considered. This is one of the first studies to summarize the literature on AVG use in weight loss in overweight youths and quantify the effectiveness of interventions using a meta-analysis. AVG are a recent development and therefore our study has looked at a novel tool that is designed to stimulate interest, maintain engagement and has the potential to shift some screen time from sedentary to active, to aid in weight loss and improve body composition in overweight children and teenagers.
Conclusions Although only BMI Z-score was significantly reduced in the AVG group, results are still promising, as 11 of 12 studies reported at least one significantly improved weight outcome, suggesting that more RCTs with standardized methodology, bigger samples, intention-to-treat protocols, longer follow-up, children and teenagers from all age groups and assessment of BMI Z-score and body mass composition could find beneficial effects of AVG on weight control in overweight children and adolescents. Such studies are thus encouraged. Videogame industry and researchers could cooperate to produce evidence-based exergaming strategies that are suited for children and adolescents and aim at controlling weight and achieving internationally recommended PA levels.
Abstract Background The prevalence of childhood obesity has been increasing for several decades. Active video games (AVG) may be an effective intervention to help manage this rising health crisis. The aim of this review is to evaluate whether AVG are effective at reducing weight or improving body composition in overweight youths. Method Medline, Embase, SportDiscus, ASSIA, CINAHL Plus, CENTRAL, CDSR and PsychINFO databases were searched for studies, published in English, assessing the quantitative or qualitative impact of AVG in overweight adolescents. Three authors screened the results using inclusion/exclusion criteria. Results A total of 12 studies met the inclusion criteria; 11 reported a significant decrease in at least one weight outcome. Results from seven randomized controlled trials were pooled by meta-analysis; compared with controls, subjects in AVG groups demonstrated a greater body mass index (BMI) Z-score reduction (mean difference: −0.09 (−0.12, −0.05), I² = 34%, P < 0.0001). The mean reductions in weight (−2.66 kg (−5.67, +0.35), I² = 0%, P = 0.08) and BMI (−2.29 (−4.81, +0.22), I² = 49%, P = 0.07) were greater in AVG groups, but these results did not reach statistical significance. Conclusions BMI Z-score was significantly reduced in the AVG group and the majority of included studies reported significant results in at least one weight outcome, suggesting AVG can be used to reduce weight or improve body composition in overweight youths. Further studies investigating the long-term sustainability of this change in body composition are needed.
Supplementary Material
Abbreviations PA, physical activity; AVG, active videogames; WHO, World Health Organization; BMI, body mass index; RCT, randomized controlled trials Conflict of Interest The authors have no conflict of interest to disclose. Funding No funding for this study. Financial disclosure The authors have no financial relationships relevant to this article to disclose. M. Bourke, Paediatric Doctor L. Patterson, Statistician F. Di Nardo, Statistician P. Whittaker, Public Health Doctor A. Verma, Public Health Doctor
CC BY
no
2024-01-16 23:43:51
J Public Health (Oxf). 2023 Jul 26; 45(4):935-946
oa_package/30/6b/PMC10788842.tar.gz
PMC10788843
37164906
Introduction Chronic low back pain is extremely common and mainly affects patients over 60, with a prevalence of about 70%, 1–4 worsening the quality of life of patients and imposing negative economic consequences on health care systems. 1 , 5 Recently, biological therapy with mesenchymal stem cells (MSCs) has been introduced in the management of discogenic pain and degenerative disc disease (DDD). 6 Back pain of discogenic origin has a multifactorial pathogenesis, and genetic factors, age, body mass index (BMI), smoking, work activity and trauma contribute to the development of the pathology. 7–15 Aging is accompanied by profound modifications of the intervertebral disc, including alterations of the normal anabolic/catabolic balance, which normally keeps the intervertebral disc intact. 16 The nucleus pulposus loses water, and calcific areas induce a lower capacity to distribute load with a reduction of the intervertebral space. 13 , 17 The lower synthesis of type I collagen, the main constituent of the fibrous annulus, progressively reduces the elastic properties of the nucleus pulposus, favouring protrusion and herniation. 18–21 In addition to collagen, age-related changes also affect proteoglycans and the extra-cellular matrix (ECM). Generally, the ratio of chondroitin sulphate to keratan sulphate is in favour of the former; with age, this ratio is reversed, reducing hydrophilia. 22–24 The metalloproteinases of the ECM are less subject to inhibitory control; in addition, degenerative processes induce an acidic environment that further promotes the activation of these enzymes, which participate in the degenerative processes of the disc. 25 All the alterations to the disc, together with the continuous mechanical stresses to which the spine is subjected, affect the adjacent nerve structures and manifest with the appearance of pain. 26 A high BMI increases the load on the discs, with possible earlier onset of discogenic pain. 27 The management of discogenic low back pain can be conservative or surgical. 28 Generally, the initial approach is conservative and includes nonsteroidal anti-inflammatory drugs (NSAIDs), muscle relaxants, opioids and physiotherapy. 29 , 30 In most patients, conservative management should be attempted before surgical treatment, as local and systemic complications may occur following surgery, including deep vein thrombosis, infection and myocardial infarcts. 31–33 In spinal fusion, for example, in addition to the risks of non-union and hardware failure, alterations to the adjacent upper and lower vertebral segments are common due to abnormal load distribution. 34 Recently, stem cell therapies have been increasingly studied to promote regeneration of the disc structures that determine the onset of symptoms. Degenerative discopathy seems to be responsible for 40% of low back pain. 35 The intervertebral disc has its own multipotent stem cells, with progenitor cells both in the nucleus pulposus and the annulus fibrosus, with markers typical of MSCs. 36 These stem cells can differentiate and participate in regenerative processes. 37–40 With age, these cells progressively reduce, affecting the repair capabilities of the intervertebral disc. In the annulus fibrosus, progenitor cells can differentiate into different cell lines, such as adipocytes, chondrocytes, osteoblasts and endothelial cells. 41 Other stem cells, both adipose and medullary, can differentiate into cells with characteristics similar to those of the nucleus pulposus under appropriate stimuli. 
42–44 In vitro, inoculated MSCs can develop phenotypic features similar to those of the disc's own cells, capable of synthesizing the different matrix components when stimulated by growth factors such as Transforming Growth Factor-β (TGF-β), 45–47 growth differentiation factor 5 (GDF5) and growth differentiation factor 6 (GDF6), which belong to the TGF-β superfamily. In these studies, GDF-5 favoured the phenotypic differentiation of bone marrow (BM) stem cells into cells of the nucleus pulposus by promoting the synthesis of type II collagen, 48 but it did not stimulate the production of proteoglycans, as TGF-β1 did. 49–51 Therefore, in stem cell therapy, it is important to consider both the type of stem cells and the growth factors used in combination with them, as well as the use of scaffolds. Patients in whom stem cell therapy would be indicated present with early disc degeneration, mild to moderate pain and failure of conservative therapy. Ideal patients are those with degenerative involvement of a single Pfirrmann Grade III–IV disc. 52 This review defines the current knowledge on the effectiveness of biological therapy using MSCs in patients with discogenic back pain.
Methods This study and its procedures were organized, conducted and reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines 53 ( Fig. 1 ). Eligibility criteria We searched for studies on the use of stem cells in the management of discogenic back pain. Studies included in the search were case reports and case series, clinical trials and systematic reviews. We excluded animal studies, editorials, narrative reviews and articles in which stem cells were used in combination with confounding factors that could affect the outcome, such as platelet-rich plasma (PRP). Data sources and search We performed an exhaustive search of all databases associated with PubMed and Scopus up to April 2023, using the following key words: MSCs, stem cells, back pain, discogenic back pain, intervertebral disc degeneration. Study selection The articles resulting from the search were evaluated independently by two orthopaedic residents. A researcher experienced in systematic reviews resolved cases of doubt. The initial selection of articles was based on the title and reading of the abstract. In accordance with the inclusion and exclusion criteria previously reported, the articles considered relevant to the aim of the study were selected. Subsequently, these articles were read in their entirety to ascertain their actual relevance to the purposes of this review. Data collection The data extracted from reading the articles included in the present systematic review were collated in an Excel database. Doubts and inconsistencies were resolved by discussion. The features analysed included: type of stem cells employed, characteristics of the culture medium, clinical scores and MRI outcomes. Methodological assessment We used the Modified Coleman Methodology Score (MCMS) 54 criteria to assess the studies reviewed ( Table 1 ). A score from 0 to 100 is assigned to each study; a score of 100 indicates a study in which there are no confounding factors or bias. The MCMS was correlated with publication year to examine the chronological trend in methodology. 54
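As an illustration of the methodological assessment described above, the following minimal Python sketch computes Pearson's correlation between MCMS values and publication years; the scores and years shown are hypothetical placeholders, not the data extracted in this review.
```python
# Minimal sketch of the MCMS-versus-publication-year correlation described above.
# The values below are hypothetical placeholders, not the extracted study data.
from scipy.stats import pearsonr

publication_year = [2006, 2010, 2013, 2015, 2017, 2019, 2021, 2022]  # hypothetical
mcms_score = [58, 62, 66, 70, 68, 72, 75, 73]                        # hypothetical

r, p_value = pearsonr(publication_year, mcms_score)
print(f"Pearson r = {r:.2f}, P = {p_value:.3f}")
# A positive r with P >= 0.05 suggests a trend towards better methodology over
# time that does not reach statistical significance.
```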
Results The initial search produced a total of 601 articles. After removal of duplicates, we obtained 339 articles. After the first abstract and title analysis, we excluded 95 articles. From the 244 remaining articles, we excluded 146 articles after full-text assessment. A total of 14 articles were included in the present review ( Table 2 ). Ten of 14 studies used stem cells derived from the BM. Three of these studies used Bone Marrow Concentrate. In one of these studies, stem cells were cultured with nucleus pulposus (NP) cells. In the remaining four studies, different stem cells were used [adipose, umbilical, chondrocytarian origin (NuQu® allogeneic juvenile chondrocytes)] and a pre-packaged product, Mesoblast (MPC-06-ID, Mesoblast), was also employed. The details of the culture are reported in Table 3 . Stem cells were mixed with other substances before injection. In three studies, a platelet lysate was used; in two, a saline solution; in four, hyaluronic acid; in one, fibrin; and in one, collagen sponges. The injection volume varied between 1 and 3 ml. Yoshikawa et al. used collagen sponges with a volume of 10 ml. 67 All studies reported beneficial results of stem cell therapy, with improvements in pain, strength and return to daily and work activities. Different scores were used. The most commonly used were the VAS and ODI, used in 9 of 14 and 10 of 14 studies, respectively. Other scores were: SF-36, used in 5 of 14 studies; NRS, in 2 of 14; JOA, in 2 of 14. In relation to the VAS, 5 of 8 studies used a scale from 0 to 100, 2 of 8 from 0 to 10 and 1 study did not report such data. Using Student's t-test to compare pre- and post-management ODI scores, the P-value was 0.0004; similarly, for the VAS score, the P-value was <0.0001. The details of the different scores are reported in Table 4 . The MRI baseline characteristics of early stage patients were disc hydration, height, bulging or protrusions and annulus tears. The MRI was repeated at follow-up to identify any changes in these characteristics. In 7 of 14 studies, the water content of the disc was evaluated with the MRI T2-weighted sequence, evidencing that hydration had increased. The height of the disc was assessed in 8 of 14 studies, with encouraging results related to the conservation or increase of the height of the discs. Bulging was evaluated in 4 of 14 studies, with a reduction in at least 23% of cases. In 6 of 13 studies, the condition of the spine was graded using Modic criteria, from grade I to III; 6 of 14 studies used the Pfirrmann grading system, from grade I to V; finally, in 2 of 14 studies, the Modified Dallas Discogram Description, from grade 0 to IV, was used. In 3 of 14 studies, the most common adverse effect was injection pain, treated with NSAIDs and opioids. The use of subsequent surgical treatment was considered as failure of stem cell therapy; this occurred in 4 of 303 patients. MCMSs Calculating Pearson's correlation coefficient between the MCMS and the year of publication ( Fig. 2 ), we obtained a positive but not statistically significant association ( r = 0.48, P -value 0.1); thus, there was no clear improvement in methodology in recent years. The mean MCMS score was 69.64. Table 5 reports mean, SD and range for each MCMS criterion.
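The pre- versus post-management comparison of clinical scores reported above can, in principle, be reproduced with Student's t-test; the sketch below assumes paired study-level values and uses hypothetical ODI scores purely to show the calculation, not the values extracted from the included studies.
```python
# Hypothetical paired comparison of ODI scores before and after intradiscal
# MSC injection; the same approach applies to VAS scores.
from scipy.stats import ttest_rel

odi_pre = [52, 48, 60, 55, 47, 63, 50, 58]    # hypothetical baseline values
odi_post = [30, 28, 35, 25, 26, 40, 22, 31]   # hypothetical follow-up values

t_stat, p_value = ttest_rel(odi_pre, odi_post)
print(f"paired t = {t_stat:.2f}, P = {p_value:.4f}")
```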
Discussion MSCs have been used for regenerative therapy in different musculoskeletal conditions. MSCs have been shown to be effective and safe in osteoarthritis and in meniscal, tendon and ligament injuries. 68 MSCs can be obtained from different tissues: fat, BM and umbilical cord. Stem cells derived from the BM are the most commonly studied, although stem cells derived from adipose tissue are more numerous. Adipose tissue-derived mesenchymal stem cells (AT-MSCs) have a lower capability to differentiate into chondrocytes; in some studies, preculture with NP cells was performed to increase their regenerative capability. 69 Stem cells derived from the umbilical cord are used for their low immunogenicity. Discogenic back pain is one of the most common conditions affecting individuals between the fifth and seventh decade, and it is estimated that by 2050 over 2 billion people will be over 60. 70 There is no association between pain and MRI appearance. 71 During the progression of this chronic condition, there is a shift from type I to type II collagen with progressive dehydration of the ECM and consequent reduction of the mechanical support capability of the disc. 16 Cell transplant therapy, involving both MSCs and NP cells, has resulted in increased water content in the disc and consequent height restoration in both in vivo and human studies. 36 , 72–74 The percutaneous implantation of MSCs may induce pain relief through three mechanisms: inhibition of nociceptors, reduction of catabolism and repair of tissues. Noriega et al. 75 used stem cells derived from allogeneic marrow without adverse events. They quantified the slope of pain relief from baseline to compare the various trials, documenting an efficacy of 0.28 for allogeneic versus 0.71 for autologous MSCs. Some studies used NP cells to prevent ‘graft versus host disease’ (GVHD), but these cells had a poor capacity for ECM regeneration. 64 Mochida et al. 64 cultured NP cells together with MSCs to increase the synthetic capacity of autologous NP cells and reduce the risk of GVHD. Umbilical MSCs could differentiate into NP cells when cultured with them. 76 Coric et al. 52 used allogeneic chondrocytes to avoid damaging the already compromised NP and further aggravating the pathology. Cells from young patients showed a greater ability to synthesize ECM, without causing GVHD. One aspect to consider is the low-oxygen environment of the disc, which is also required for successful MSC culture. Indeed, cells grown at normal oxygen concentrations induced an increase in disc hydration, but not in height. 66 Several studies reported on the cross-talk between the injected MSCs and the native NP cells, in particular the TGF-beta signalling system, hypothesizing a major role in the regeneration of the ECM. 77 Overall, the studies included in this review indicate that percutaneous intradiscal injection of MSCs was safe and resulted in a high success rate. A multicentre study 58 evaluated four types of therapies (growth factor BMP-7, active fibrin sealant, growth factor rhGDF-5 and MSCs), comparing them to placebo (saline solution) and obtaining good results. A possible effect of the injection of saline solution is the dilution of the cytokines responsible for inflammation. 78 Noriega et al. 63 obtained interesting results in relation to the time of follow-up. In the control group, which received an injection of local anaesthetic, they obtained a decrease in VAS within 8 days of administration, without further improvements; the ODI worsened during the year of follow-up.
In contrast, in the study group treated with MSCs, the greatest effect was achieved at about 3 months and maintained at the 6- and 12-month follow-ups. Different scores were used in the various studies to evaluate the state of degeneration of the disc and, consequently, the eligibility of patients for therapy. Patients with complete annular fissuration could not be treated because of disc incontinence. During the injection, Kumar et al. 60 suspended the MSCs in a hyaluronic acid derivative, aiming to reduce or prevent the dispersion of stem cells and any differentiation into osteoblasts. MSCs can differentiate into fibroblasts 59 and strengthen the annulus, preventing herniation by depositing new collagen fibres. In fact, 85% of patients showed a reduction in posterior bulge. A reduction of at least 25% of the bulge decreased the pain significantly. Only one case of herniation that required surgery was reported after 5 months. This complication could have resulted from needle injection, excessive proliferation of MSCs or excessive production of ECM. Among the complications related to the injection of MSCs is the formation of osteophytes in the tissues surrounding the injection site. 79 When conservative therapy fails, it is possible to use different surgical methods, 7 , 80 but these carry several complications: dural lesions, infections and epidural hematomas. 81–84 In spinal stabilization, for example, limiting the movements of the affected segment increases the stress imposed on the adjacent vertebrae, contributing to the degeneration of those discs. Pettine et al. reported a reduced length of hospital stay with MSCs compared with surgical treatment, which involves 5 days in hospital. 61 Despite this, some failures necessitated surgical treatment; for example, three patients were treated surgically for persistent pain between 6 and 12 months after MSC implantation. 52 New therapeutic approaches aim to induce the migration of MSCs to the damaged site and warrant further exploration. 85 , 86 The limitations of this study are related to the low number of articles, the lack of data on patients, the aetiology of discogenic back pain, the type of culture medium and the solution injected, and the use of different clinical scores in the various studies. All these factors prevent homogeneous results regarding treatment efficacy from being obtained.
Conclusion Stem cells are a promising potential resource to be exploited in the management of musculoskeletal conditions associated with aging, in which the cellular regenerative capabilities can be employed. Further research efforts should define the actual effectiveness of MSCs in the different areas of their use.
Abstract Background Chronic low back pain, common from the sixth decade, negatively impacts the quality of life of patients and health care systems. Recently, mesenchymal stem cells (MSCs) have been introduced in the management of degenerative discogenic pain. The present study summarizes the current knowledge on the effectiveness of MSCs in patients with discogenic back pain. Sources of data We performed a systematic review of the literature following the PRISMA guidelines. We searched the PubMed and Google Scholar databases, and identified 14 articles about the management of chronic low back pain with MSC injection therapy. We recorded information on the type of stem cells employed, culture medium, clinical scores and MRI outcomes. Areas of agreement We identified a total of 303 patients. Ten studies used bone marrow stem cells. In the other four studies, different stem cells were used (of adipose, umbilical, or chondrocytic origin and a pre-packaged product). The most commonly used scores were the Visual Analogue Scale and the Oswestry Disability Index. Areas of controversy There are few studies, with many missing data. Growing points The studies analysed demonstrate that intradiscal injections of MSCs are effective for discogenic low back pain. This effect may result from inhibition of nociceptors, reduction of catabolism and repair of injured or degenerated tissues. Areas timely for developing research Further research should define the most effective procedure, trying to standardize a single method.
Acknowledgements This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Authors’ contributions Luca Miranda (Conceptualization, Data curation, Methodology), Marco Quaranta (Conceptualization, Data curation, Formal analysis, Methodology), Francesco Oliva (Conceptualization, Formal analysis, Validation, Writing—original draft), Nicola Maffulli (Conceptualization, Data curation, Project administration, Supervision, Validation, Writing—review & editing). Conflict of interest statement The authors have no potential conflicts of interest. All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript. The authors have no financial or proprietary interests in any material discussed in this article. Data availability All data generated or analysed during this study are included in this published article.
CC BY
no
2024-01-16 23:43:51
Br Med Bull. 2023 May 10; 146(1):73-87
oa_package/fd/81/PMC10788843.tar.gz
PMC10788844
37675799
Introduction Osteoporosis (OP) is a common metabolic bone disease, with a higher incidence in the elderly and postmenopausal population. 1 Affected patients develop a reduction in bone mass with consequent bone fragility. The bone microarchitecture is altered by an imbalance of function between osteoclasts and osteoblasts. 1 In particular, the increased osteoclastic activity causes fragility that predisposes to fractures after even minimal trauma. 1–3 Advancing age is a predisposing factor, but it is not the cause of osteoporosis. 4–6 Physiologically, in elderly subjects the activity of osteoclasts tends to be greater than that of osteoblasts. In osteoporosis, the activity of the osteoclasts produces excessive resorption, which therefore exceeds the physiological aging of the bone. 5 , 7 The current management of OP aims to restore bone components through the use of calcium and vitamin D; hormones or drugs that act on osteoclastic activity can also be used, but the results are often unpredictable, and undesirable side effects are often encountered. 5 , 8 Recent scientific research has focused on the regulatory mechanisms of eukaryotic cells, 9–11 including RNA interference (RNAi), 12–15 to identify possible molecular and gene targets to formulate novel therapies. 16–19 Usually, a small interfering RNA (siRNA) is composed of about 20 nucleotides arranged to form a double-stranded ribonucleic acid (RNA) molecule. 12 , 20 , 21 The interference mechanism through which RNAi acts involves various elements: the passenger (sense) strand, the guide (antisense) strand, enzymes such as Dicer and Argonaute, and the central component, the RNA-induced silencing complex (RISC). The guide strand is a nucleotide sequence recognized by Dicer, which selects it and integrates it into RISC, while the passenger strand is degraded; the guide strand then directs RISC to its complementary target 12 , 22 ( Figure 1 ). The study of siRNAs should allow their physiological role to be understood and, consequently, their activity to be modulated for therapeutic purposes. The field of application of siRNAs is very varied, and gene therapies can be used for viral infections, autoimmune diseases, tumors and endocrinological diseases. 12 , 22 The use of siRNAs can reduce the expression of genes involved in several conditions. To date, the sequences of 4894 chemically modified siRNAs are available. 13 , 23 SiRNAs can be used to study human pathologies and the biological processes involved in such pathologies. However, they have a short half-life. Structural chemical modifications are used to increase the half-life of siRNAs, making them more stable. 12 , 22 In OP, the imbalance between bone resorption and bone apposition is determined by a decrease in the activity of osteoblasts and an increase in the activity of osteoclasts, mediated by both hormonal and molecular factors. 24 Specific siRNAs have been used to identify targets for potential targeted therapies, or to study specific pathways and determine which factors and molecules are increased or decreased in OP. 24 The present review evaluates the current scientific evidence on the use of siRNAs in the management of osteoporosis.
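As a simplified illustration of the strand complementarity underlying siRNA target recognition described above, the following Python sketch derives a guide strand as the reverse complement of a hypothetical 21-nucleotide target region; it deliberately ignores Dicer processing, overhangs and strand-selection thermodynamics, and all sequences are invented for illustration only.
```python
# Simplified, hypothetical illustration of siRNA-target complementarity.
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(rna))

target_mrna_region = "AUGGCUAGCUAGGCUAACGUC"            # hypothetical 21-nt target
guide_strand = reverse_complement(target_mrna_region)   # antisense strand loaded into RISC

# The guide strand base-pairs with the target region, directing RISC-mediated silencing.
print(guide_strand)
print(reverse_complement(guide_strand) == target_mrna_region)  # True
```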
Methods The review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines 25 , 26 ( Figure 2 ). All published investigations reporting the possible role of siRNA in the management of OP according to a priori established inclusion criteria were considered. Only studies published in English were included in the present investigation. Narrative and systematic reviews, meta-analyses, technical notes and case reports were excluded. Two investigators independently conducted the systematic search, through May 2023, from the full-text archives of Embase, Google Scholar, Scopus and PubMed. In the search, we used combinations of the following key terms: Osteoporosis, Osteoporosis therapy, small interfering RNA, short interfering RNA, RNA silencing, RNA interference, with no limit on year of publication. Two investigators independently examined the titles and abstracts to remove duplicates, and evaluated the eligible studies according to the pre-established inclusion criteria. If titles and abstracts did not allow a decision on inclusion or exclusion, the relevant full text was examined. The bibliographies of the articles included were reviewed by hand to identify further related articles. If discrepancies persisted, discussion with the senior investigator resolved them. Fourteen studies satisfied the inclusion criteria, and were thus included in the analysis. The details of the search are reported in the flowchart in Figure 1 .
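The deduplication step of the screening workflow described above can be sketched as simple bookkeeping; the records below are hypothetical, and in the review itself duplicate removal and screening were performed independently by two investigators.
```python
# Illustrative deduplication step of the screening workflow; record titles are hypothetical.
records = [
    {"title": "siRNA silencing of PLEKHO1 in osteoblasts", "source": "PubMed"},
    {"title": "siRNA silencing of PLEKHO1 in osteoblasts", "source": "Scopus"},  # duplicate
    {"title": "RNA interference and bone metabolism", "source": "Embase"},
]

seen, unique_records = set(), []
for rec in records:
    key = rec["title"].strip().lower()   # normalize titles before comparison
    if key not in seen:
        seen.add(key)
        unique_records.append(rec)

print(f"{len(records)} records retrieved, {len(unique_records)} after deduplication")
# Titles and abstracts of the unique records would then be screened against the
# a priori inclusion and exclusion criteria before full-text assessment.
```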
Results A total of 875 articles were identified. The duplicates were subsequently removed, obtaining 578 articles. At this point, 297 articles were excluded after reading the titles and abstracts. Of the remaining 112 articles, 98 were excluded as they were not appropriate for the topics covered or for the incomplete amount of information reported. Data from the 14 studies which met the inclusion criteria were extracted and collected in Table 1 . Of these 14 studies, 12 used siRNAs to silence specific genes, and then identified gene and protein targets to produce a targeted therapy. Another two studies used siRNAs to monitor the function of some drugs used for the management of osteoporosis. SiRNAs as potential therapeutic agents Liu et al. 27 studied human osteoblasts of fractured elderly patients, and rodent osteoblasts. The concentration of pleckstrin homology domain-containing family O member 1 (PLEKHO1) increases with aging, and this is correlated with a reduction of small mother against decapentaplegic (SMAD)-dependent bone morphogenetic protein (BMP) signalling and of bone formation. By using a PLEKHO1-specific siRNA to reduce PLEKHO1 levels, the process of bone aging could be reversed. siRNA against PLEKHO1 may therefore be proposed as a possible treatment for osteoporosis. 27 Adam et al., 28 using human mesenchymal stem cells (hMSC) and specific siRNA, provided evidence that nitrogen-containing bisphosphonates (N-BP) activate the mitogen-activated protein kinase kinase (MEK) 5/extracellular signal-regulated kinase (ERK) 5 cascade, which has an essential role in the osteogenic differentiation and mineralization of skeletal precursors. 28 Using specific siRNAs against guanylate binding protein 1 (GBP1), Bai et al. 29 demonstrated that osteogenic activity in human mesenchymal stem cells (hMSC) increased when GBP1 was inhibited, and decreased under normal conditions. This result was in line with the higher concentration of GBPs in premenopausal patients, and suggests siRNA against GBP1 as a possible therapeutic approach to osteoporosis. 29 SiRNAs to test the efficacy of drugs Oxidative stress plays an important role in the progression of osteoporosis. For this reason, Yang et al. 30 studied the effects of the natural antioxidant Tanshinol against oxidative stress on the differentiation of osteoblastic cells. Hydrogen peroxide (H 2 O 2 ) leads to the accumulation of reactive oxygen species (ROS), decreased cell viability, cell cycle arrest and apoptosis in a caspase-3-dependent manner. The action of Tanshinol was tested using specific siRNAs against the transcription factor Forkhead box O3a (FOXO3A). Tanshinol suppresses the activation of FoxO3a and the expression of its target genes. Tanshinol neutralizes the action of Growth arrest and DNA-damage-inducible protein 45 alpha (GADD45-α) and catalase (CAT), produced in response to DNA damage. It also counteracts the binding of Wingless (WNT) to its site of action by targeting genes for axis inhibition protein 2 (AXIN2), alkaline phosphatase (ALP), and osteoprotegerin (OPG). Tanshinol attenuates oxidative stress through the down-regulation of FoxO3a signaling, and at least partially reverses the decrease in osteoblastic differentiation, making it a possible drug in the therapy of osteoporosis. 30 Berberine (BBR) has recently been used in osteoporosis patients. Tao et al. 31 investigated the osteogenic differentiation induced by this drug in bone marrow mesenchymal stem cells (BM-MSCs).
For this purpose, they used a β-catenin-specific siRNA to study cell lines in the presence and absence of BBR. BBR can stimulate the osteogenic differentiation of mesenchymal stem cells (MSC) by improving the expression of Runt-related transcription factor 2 (RUNX2) and activating the WNT/β-catenin signaling pathway, which is partly responsible for the osteogenic differentiation induced by BBR in MSCs in vitro. BBR is therefore a potential pharmaceutical drug for osteoporosis. 31 SiRNAs to identify potential therapeutic targets Tang et al. 32 studied human mesenchymal stem cells, using a specific siRNA against ARF GTPase-activating protein (ARF-GAP) with RHO GTPase-activating protein (RHO-GAP) domain 3 (ARAP3). They demonstrated a new pathway of osteogenic activation. siRNA against ARAP3 led to the recovery of Ras homolog family member A (RHOA) and focal adhesion kinase (FAK) activities, producing an increase in osteogenic activity. This new route could be used to develop novel therapies in osteoporosis. 32 Zhu et al. 33 studied bone marrow mesenchymal progenitors. Stimulation with conditioned media from parathyroid hormone (PTH)-treated osteoblastic and osteocytic cells, which contain soluble chemotactic factors for bone marrow mesenchymal progenitors, resulted in increased epidermal growth factor receptor (EGFR) phosphorylation in the treated cells. The study used inhibitors, including specific siRNAs, showing that PTH increases the release of amphiregulin from osteoblastic cells, which acts on EGFRs expressed on mesenchymal progenitors to stimulate the protein kinase B (PKB) and p38 mitogen-activated protein kinase (MAPK) pathways, and subsequently promote their migration in vitro . Subsequently, the inactivation of the EGFR signal on osteoprogenitors/osteoblasts attenuated the anabolic actions of PTH on bone formation. These results suggest a therapeutic role of PTH in osteoporosis through an anabolic effect of EGFR signaling on bone. 33 Mullin et al. 34 performed a knockdown study of the Rho guanine nucleotide exchange factor 3 (ARHGEF3) and Ras homolog family member A (RHOA) genes using siRNAs in human osteoblasts and osteoclast-like cells in culture. Real-time quantitative reverse transcription polymerase chain reaction (qRT-PCR) showed significant down-regulation of the actin alpha 2 (ACTA2) gene, encoding the cytoskeletal protein alpha 2 actin, in response to RHOA knockdown in both osteoblasts and osteoclasts. RHOA knockdown also upregulated the parathyroid hormone receptor 1 (PTH1R) gene. Knockdown of ARHGEF3 in osteoblast-like cells resulted in down-regulation of the tumor necrosis factor receptor superfamily member 11b (TNFRSF11B) gene, coding for osteoprotegerin. This study identifies ARHGEF3 and RHOA as potential regulatory genes in bone metabolism that can be used as targets for specific therapies for osteoporosis. 34 Sun et al. 35 studied the cannabinoid receptor 2 (CNR2) on bone marrow-derived mesenchymal stem cells (BM-MSC). The study was conducted using knockdown of CNR2 by siRNA. Inactivation of the CNR2 receptor reduces the activity of alkaline phosphatase (ALP), inhibits the expression of osteogenic genes and induces a deposition of calcium in the extracellular matrix. Furthermore, bone marrow samples showed that the expression of CNR2 is much lower in patients with osteoporosis than in healthy donors: CNR2 deficiency may be related to osteoporosis. 35
Tong et al. 36 used blood mononuclear cells (MNCs), as they are directly involved in osteoclastogenesis and osteoporosis. Through a specific siRNA against Differentiation Antagonizing Non-Protein Coding RNA (DANCR), they showed a reduction of interleukin 6 (IL6) and tumor necrosis factor alpha (TNF-α). DANCR was therefore a regulator of osteoblastic activity. Its inhibition induced greater osteoblastic activity, shifting the balance away from osteoclastic activity and thus favoring bone production and mineralization. As DANCR is overexpressed in osteoporosis, it can be a target against osteoporosis. 36 Starting from the evidence of bone abnormalities and osteoporosis in patients with nevoid basal cell carcinoma syndrome (NBCCS), Hong et al. 37 wanted to identify a gene that could cause these effects, with the aim of using targeted gene therapy in specific patients to safeguard them from the risk of osteoporosis. The identified gene, protein patched homolog 1 (PTCH1), was studied by specific siRNA. The downregulation of PTCH1 is associated with a reduction in Secreted Protein Acidic and Cysteine Rich (SPARC) expression, with a reduction in ossification. PTCH1 may be a possible target in the therapy against osteoporosis in specific patients. 37 Reduced WNT/β-catenin signaling decreases bone formation by reducing osteoblast differentiation. 38 , 39 Many investigations have studied the differentiation of hMSCs, with an inverse relationship between adipocytic and osteocytic development. Therefore, different signaling pathways induce MSC towards osteogenic or adipocytic differentiation. 40 Wang et al. 41 investigated the adipogenic differentiation of hMSCs by specific siRNA for insulin receptor substrate 2 (IRS2). The expression of IRS2 was increased during adipogenic differentiation, but inhibiting it with a specific siRNA suppressed such adipogenic differentiation. The balance between osteogenic and adipogenic differentiation of hMSCs is altered in pathologies such as osteoporosis. Such studies may have therapeutic value for producing drugs which block IRS2, increasing pro-osteogenic differentiation. 41 Pucci et al. 42 demonstrated that patients with OP exhibited degeneration of muscle fibers with an overexpression of Clusterin (CLU), correlating with high levels of IL6 and histone H4 acetylation in myoblasts. In the muscle tissue of subjects without OP, the muscle fibers were intact and CLU levels were low. Using specific siRNAs against CLU, inhibition of CLU restored the proliferative ability of myoblasts and repaired muscle tissue damage. CLU could therefore be considered a potential therapeutic target in OP patients. 42 Zhang et al. 43 used specific siRNAs to validate data obtained through the Multiscale Embedded Gene Co-Expression Network Analysis (MEGENA) method, which allows networks of genes involved in the pathogenesis of osteoporosis to be obtained. This allowed the identification of some genes, such as transforming growth factor beta receptor 1 (TGFBR1) and transforming growth factor beta receptor 2 (TGFBR2), involved in the differentiation and recruitment of osteoclasts. This study opens up new perspectives on the use of siRNAs to control more elaborate and large-scale pathogenetic pathways. 43
Discussion Osteoporosis produces serious structural damage to bones, increases the risk of fractures, and produces deformities that can lead to bed rest and increased mortality. 1 , 44 , 45 Osteoporotic fractures arise from multifactorial alteration of the micro-architecture of bone. 5 , 24 , 46 , 47 Hormonal factors are involved. Indeed, both sexes lose bone mass during life, but after menopause women lose bone much faster and are more prone to fragility fractures. Other factors are cellular, connected to imbalances between osteoclasts and osteoblasts. Finally, calcium and vitamin D play an important role. 7 , 26 , 48–50 Although fractures are often the first and most striking event of this pathology, such patients have developed osteoporosis long before the fracture event. 1 , 51 , 52 Authors have performed studies on human cells, mesenchymal stem cells, bone marrow mesenchymal progenitors, osteoblasts, osteoclasts and myoblasts to investigate the various metabolic pathways and identify the molecular targets on which it may be possible to intervene. 53 The current management of OP is based on antiresorptive drugs, including calcitonin, oestrogens and bisphosphonates, and on bone anabolic drugs, including teriparatide 1 , 5 , 8 ( Figure 3 ). OP patients exhibit poor compliance with drug therapy. The drugs often have serious side effects and unpredictable efficacy. Among the side effects, gastrointestinal disorders are common, and the most serious, such as osteonecrosis of the mandible, occur with bisphosphonate therapy. Long-term oral bisphosphonate therapy increases the risk of atypical fractures and the incidence of esophageal cancer. Therefore, treatment with bisphosphonates for no longer than five years is recommended. In 2010, denosumab, a monoclonal antibody targeting the receptor activator of nuclear factor kappa-B ligand (RANKL), was introduced. New therapeutic targets can be conceived through the use of siRNAs. 1 , 5 Epidermal growth factor receptor (EGFR) binds to epidermal growth factor (EGF) and also to transforming growth factor α (TGFα), leading to activation of the receptor, which dimerizes with members of a family of proteins including human epidermal growth factor receptor 2 (ERBB-2), human epidermal growth factor receptor 3 (ERBB-3) and human epidermal growth factor receptor 4 (ERBB-4). 54 This type of activation induces activity of the tyrosine kinase domains, resulting in phosphorylation and recruitment of proteins such as Son of sevenless (SOS), which in turn activates Rat sarcoma virus (RAS) protein. 55 , 56 RAS is able to activate the mitogen-activated protein kinase (MAPK) cascade responsible for the cellular differentiation of osteoclasts and osteoblasts in OP. 55 Another important molecule is TGFBR2, which codes for the transforming growth factor beta (TGFB) receptor 2, a serine/threonine protein kinase. This gene determines the phosphorylation of proteins in the cell nucleus which leads to an increase in the proliferation of osteocytes and osteoblasts. 57 , 58 Insulin-like growth factor (IGF) is a peptide hormone with anabolic properties produced by the liver and by differentiated chondroblasts. IGFs, structurally similar to insulin and responsible for anabolic activities, stimulate the synthesis of aggrecan, type VI and IX collagen and binding proteins for cell proliferation in bone, determining both the quality and the conformation of the bone. 59–62
Conclusion Many pathologies seem multifactorial or simply related to age. In reality, there are always molecular and cellular imbalances at the basis of these conditions. Unfortunately, the management of osteoporosis starts too late, only when the pathology is already manifest. Through siRNAs, it is possible to target the molecular bases that lead to OP, and then direct a specific therapy to prevent the actual condition. Various authors have used siRNAs, for example, to identify target molecules, as therapeutic agents, or to highlight the efficacy of a given drug. Studies on human cells in vitro give us hope for possible future drugs that can combat OP at its origin, without the side effects of current therapies. Appropriate studies are necessary to translate these elegant laboratory studies so that they can be introduced into routine clinical practice.
Abstract Background Osteoporosis results in reduced bone mass and consequent bone fragility. Small interfering RNAs (siRNAs) can be used for therapeutic purposes, as molecular targets or as useful markers to test new therapies. Sources of data A systematic search of different databases to May 2023 was performed to define the role of siRNAs in osteoporosis therapy. Fourteen suitable studies were identified. Areas of agreement SiRNAs may be useful in studying metabolic processes in osteoporosis and in identifying possible therapeutic targets for novel drug therapies. Areas of controversy The metabolic processes of osteoporosis are regulated by many genes and cytokines that can be targeted by siRNAs. However, it is not easy to predict whether the in vitro responses of the studied siRNAs and drugs are applicable in vivo . Growing points Metabolic processes can be affected by siRNA-mediated modulation of genes for various growth factors. Areas timely for developing research Despite the predictability of the pharmacological response to siRNAs in vitro , similar responses cannot be expected in vivo .
Author contributions Giuseppe Gargano (Conceptualization, Investigation, Methodology, Validation, Writing—original draft), Giovanni Asparago (Formal analysis, Investigation, Software), Filippo Spiezia (Data curation, Methodology, Writing—original draft), Francesco Oliva (Conceptualization, Validation, Writing—review & editing), Nicola Maffulli (Conceptualization, Data curation, Formal analysis, Supervision, Validation, Writing—original draft, Writing—review & editing). Conflict of interest The authors declare that they have no conflict of interest. Data availability The links or identifiers required for the data are present in the manuscript as described.
CC BY
no
2024-01-16 23:43:51
Br Med Bull. 2023 Sep 6; 148(1):58-69
oa_package/08/44/PMC10788844.tar.gz
PMC10788845
37496207
Introduction Frozen shoulder, sometimes referred to as adhesive capsulitis, is an insidious musculoskeletal condition that affects the glenohumeral joint. It is characterized by the formation of scar tissue, adhesions and capsular thickening within the shoulder. 1 , 2 Frozen shoulder has a reported prevalence of 2–5% in the general population, rising to 20% in individuals with diabetes mellitus. 3 Typically, patients present with excruciating pain and reduced passive and active range of motion (ROM) of the glenohumeral joint. Symptoms generally last from 6 months to 2 years. Most patients demonstrate spontaneous resolution of symptoms, and thus conservative management is commonly advised. 4 Currently, there exists a plethora of conservative management options for patients with frozen shoulder, including analgesia, corticosteroids (oral or intra-articular), physiotherapy, acupuncture, manipulation, suprascapular nerve blockade and hydrodilatation. 5 First proposed in 1965 by Andren and Lundberg, intra-articular hydrodilatation attempts to expand the joint space through the sheer hydraulic pressure exerted by the injectate. 6 However, given the marked disability caused by frozen shoulder, some patients may forgo the less invasive hydrodilatation and instead opt for more invasive surgery. This is a possible consequence of the perceived slow nature of symptom improvement with conservative approaches. 7 Additionally, there remains ambiguity surrounding the effectiveness of hydrodilatation as a treatment method. 8 Gam et al. compared hydrodilatation to corticosteroid injections alone and identified improvements in shoulder pain and ROM. 9 However, the results of this study were limited given the high risk of bias. On the contrary, Corbiel et al. and Jacobs et al. found no significant differences when assessing the same treatment modalities. 10 , 11 Furthermore, many studies have examined the efficacy of hydrodilatation amid other treatment options, and thus its specific effects have not always been assessed. 12 The effectiveness of hydrodilatation may well be short-lived, 10 , 11 as no large study has addressed this particular aspect of the intervention. 10 Hydrodilatation may potentially lower the prevalence of long-term impairments; however, it remains challenging to determine the number of patients suffering from residual deficiencies. 10 Most recently, Saltychev et al. demonstrated statistically significant symptomatic improvements with the use of hydrodilatation when assessing its effectiveness in the management of frozen shoulder. However, this effect was deemed not to be clinically relevant. 5 Thus, amid the incongruent results in the literature, more research is warranted. Nonetheless, hydrodilatation is recommended as part of the patient care pathway co-produced by the British Elbow and Shoulder Society and British Orthopaedic Association. 12 This review evaluates the current evidence on the efficacy of hydrodilatation for frozen shoulder. This study builds on the previous systematic review by Saltychev et al., 5 through the inclusion of recently published randomized controlled trials and prospective and retrospective studies.
Methods Study design The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 were used to conduct and report this review. 13 Our Population, Intervention, Comparison and Outcome framework was as follows: Participants: adults, with frozen or painfully stiff shoulders, suffering from discomfort that limits both active and passive glenohumeral joint motions. Intervention: glenohumeral joint hydrodilatation (hydrodistension and distension). Comparison: intra-articular corticosteroid injections, a placebo, sham, other interventions or no therapy. Outcome: all clinically relevant outcomes. Primary: assessment of pain and function or disability. Secondary: ROM, complications and any others. Search strategy Computer searches were conducted on PubMed, Embase, Scopus, Cochrane Central, Web of Science and CINAHL electronic databases from inception to June 18, 2023 for articles assessing hydrodilatation in patients with frozen shoulder. The goal was to increase the search strategy’s sensitivity to increase the likelihood that all relevant studies would be obtained. 14 , 15 Our search clause for the PubMed search was ‘(shoulder OR rotator OR adhesive capsulitis) AND (hydrodilatat * OR distension).’ When conducting searches on the different databases, similar clauses were utilized. We adjusted the search strategy from a previous systematic review 5 to accommodate our own needs. The search was restricted to humans only, and the reference management software EndNote was used to organize its results. The relevance of the cited studies’ references was also examined. A step-by-step process, which involved team meetings to improve the search strategy and settle disagreements, was utilized to ensure that the searches were producing relevant studies. 16 Study screening All references were downloaded from the Rayyan reference management software, and duplicates were removed before screening the title and abstract. The full texts of the remaining articles were examined after two authors (DP and RH) independently assessed the titles and abstracts. A consensus meeting between the two authors was organized to settle disputes that arose during research screening and selection. If no consensus could be reached, the senior author (NM) was contacted for a final decision. Study selection Only peer-reviewed journals were considered. There were limited restrictions on the study design within the selection criteria, which increased the likelihood of identifying pertinent studies. Thus, randomized controlled trials, prospective and retrospective comparative studies and case series were included. Level I–IV studies, according to the Oxford Centre for Evidence Based Medicine, were identified and included in our analysis. The hydrodilatation technique and follow-up period had to be well described in all included studies, which had to use at least one validated clinical outcome score or assess change in ROM. Studies needed to be published in English, and had to have recruited at least 10 adult participants. Exclusion criteria were reviews, case reports, experiments on animals, cadavers or in vitro and letters to editors. We also excluded articles with no information on hydrodilatation intervention, diagnosis, follow-up, clinical examination and statistical analysis. To prevent bias, all authors read, evaluated and discussed the included and excluded studies and the relative list of references. 
The senior investigator (NM) made the final decision if there was a disagreement among the investigators on the inclusion and exclusion criteria. Data extraction Data extracted from each study included the following: author name, study year; study design (level of evidence); number of patients (shoulders); mean age (range) (years); diabetes mellitus diagnosis; Coleman Methodology Score (CMS); imaging assessment; duration of symptoms (average) (months); outcome measures (time intervals); regimen and modification of the distension arms; comparative intervention arm; hydrodilatation technique; and complications. Data were entered in a custom Excel spreadsheet by all the investigators independently. A standardized form, based on the recommendations of the Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0, Chapter 7, was used for data extraction for the meta-analysis. 17 Discussions with the senior author (NM) allowed the resolution of any discrepancies. Quality assessment The methodological quality was assessed according to the CMS. 18 Modifications of the CMS were made to make it pertinent for the systematic review of frozen shoulder ( Table 1 ). Each study was scored by two reviewers (DP and RH) independently and in duplicate for each of the criteria adopted to give a total CMS between 0 and 100. A study design that eliminates the impact of chance, bias and confounding variables would receive a score of 100. Disagreements were resolved by discussion. The CMS is divided into sections, each of which is based on a component of the CONSORT statement (for randomized controlled trials) with modifications to accommodate various study designs. Statistical analysis The meta-analysis was performed using Review Manager, version 5.4 (The Cochrane Collaboration). The I 2 statistic was used to test for statistical heterogeneity and was assessed as follows: 0% < I 2 < 25%, low heterogeneity; 25% < I 2 < 50%, moderate heterogeneity; and I 2 > 50%, high heterogeneity. 18 This effectively describes the percentage of variation across studies that originates from heterogeneity rather than from chance. We used the random-effects model because outcome measurements were taken at different time points, and the different phases of frozen shoulder increase the risk of heterogeneity. Data for quantitative analysis were extracted at two time points: the first follow-up post-intervention and the last follow-up post-intervention. Egger’s test and a funnel plot were used to evaluate publication bias. When just the interquartile range (IQR) was provided, IQR/1.35 was used to calculate the standard deviation (SD). According to the Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0, Chapter 7, the mean was presumed to be the same as the median when only the median was given. 17 SD was computed as (max-min)/4 when only the range was given. Cohen’s d, a standardized mean difference (SMD) in variable change between groups, was used to calculate the effect sizes. Variables were measured by the SMD with 95% confidence intervals (95% CIs). Data synthesis was initiated for each included study by combining pertinent reported outcomes stratified by pain, disability and ROM at pre-determined time points (earliest and latest follow-ups). In all analyses, a P -value < 0.05 was considered statistically significant. Sensitivity analysis was conducted to evaluate the reliability of the effects.
During the sensitivity analysis, one study was eliminated, and studies with very heterogeneous findings were also excluded.
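The effect-size conventions described in the statistical analysis (SD imputed from the IQR or range, Cohen's d as the SMD, and Higgins' I² for heterogeneity) are sketched below in Python with hypothetical summary data; the actual pooling was performed in Review Manager 5.4, so this is only an illustrative outline of the stated formulas.
```python
# Minimal sketch of the effect-size conventions stated above, with hypothetical numbers.
import math

def sd_from_iqr(iqr: float) -> float:
    """Approximate the SD when only the interquartile range is reported."""
    return iqr / 1.35

def sd_from_range(minimum: float, maximum: float) -> float:
    """Approximate the SD when only the range is reported."""
    return (maximum - minimum) / 4

def smd(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

def i_squared(q_statistic: float, df: int) -> float:
    """Higgins' I-squared (%) from Cochran's Q and its degrees of freedom."""
    return max(0.0, (q_statistic - df) / q_statistic) * 100

# Hypothetical example: change in VAS after hydrodilatation versus corticosteroid alone.
print(smd(-4.2, sd_from_iqr(2.7), 30, -3.6, sd_from_range(-8.0, 0.0), 28))
print(i_squared(q_statistic=12.4, df=6))  # about 51.6%, i.e. high heterogeneity
```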
Results Study identification and selection Our initial search yielded 1234 articles, with a total of 452 left following the removal of duplicates. We then screened the titles and abstracts of the remaining articles and retained 54 articles for full-text evaluation, which resulted in 39 studies ( Fig. 1 ). Demographics A total of 2623 participants and 2632 shoulders were included. The number of participants recruited in each study varied from 22 to 250. Data on the incidence of diabetes were reported in 16 included studies. Of the 1187 patients, 224 (18%) were diabetic ( Table 2 ). Study identification and selection A total of 20 studies (51.3%) used imaging, such as ultrasound or magnetic resonance imaging, to confirm the diagnosis of frozen shoulder. The hydrodilatation procedures were performed under ultrasound or fluoroscopic guidance. In 21 studies, the hydrodilatation was administered through the posterior approach, in 13 through the anterior approach, and in 1 using both anterior and posterior approaches. Inclusion and exclusion criteria were overall quite similar across most articles. The volume of mixture injected for hydrodilatation ranged from 9 to 100 mL. Typically, the hydrodilatation mixture consisted of corticosteroids, local anaesthetic and normal saline solution, and only one study used a combination of hyaluronic acid and lidocaine. 45 Intra-articular corticosteroid injections were the most commonly utilized reference therapy. Arthroscopic capsular release (ACR), manipulation under anaesthesia (MUA), placebo (arthrogram), general physical therapy and treatment as usual (i.e. physical therapy and oral medication) were also used ( Table 3 ). Outcomes measurements The included studies used several outcome measures. The visual analogue score (VAS) was used in 21 articles; the Shoulder and Pain Disability Index (SPADI) in 18 studies; the Oxford Shoulder Score (OSS) in seven studies; the Disabilities of the Arm, Shoulder and Hand (DASH) in three studies; and the Constant-Murley score in five studies. Quality assessment The average CMS score was 63, indicating that the overall quality of the included studies was fair. Table 2 provides the actual values of the CMS. Inter-rater reliability was calculated between the mean values of CMS calculated by two authors (DP and RH). Cohen’s kappa coefficient ( k ) was 0.779661, indicating substantial agreement for the first round of methodological quality assessment. The intra-rater reliability was k = 0.864111 and 0.915309 for DP and RH respectively, indicating almost perfect agreement. Complications The included studies reported transient complications such as flushing, local depigmentation of the skin, loss of sensory and motor control in the affected arm, loss of sleep, nausea, dizziness, 31 , 35 , 36 , 40 , 50 , 55 hypotensive syncope 37 and after-injection pain. 9 , 24 , 26 , 34 , 36 In one patient, hydrodilatation was abandoned from unbearable pain during the procedure. 28 Two studies reported one patient each with a glenohumeral joint infection. 44 , 50 Meta-analysis of the studies evaluating the effect of capsular distension versus corticosteroid alone There was no significant benefit of intra-articular corticosteroid injection alone compared with capsular distension at the first follow-up post-intervention (SMD, 0.09; 95% CI, −0.27 to 0.45) and at the last follow-up post-intervention (SMD, −0.02; 95% CI, −0.21 to 0.17) when pain scores were evaluated ( Fig. 2 ). 
In terms of disability, hydrodilatation was favoured over intra-articular corticosteroid injection at first follow-up post-intervention (SMD, 0.24; 95% CI, 0.05–0.43). However, this was not observed at the last follow-up post-intervention (SMD, −0.01; 95% CI, −0.23 to 0.22) ( Fig. 3 ). Regarding improvements in passive shoulder ROM, hydrodilatation prevailed over intra-articular corticosteroid injections when assessing passive external rotation at the earliest (SMD, 0.43; 95% CI 0.12–0.74) and at the latest follow-up post-intervention (SMD, 0.68; 95% CI, 0.21–1.16) ( Fig. 4 ). Moreover, there were no statistically significant differences in passive forward flexion, abduction or internal rotation at both time points ( Figs 5 – Fig. 7 ). The Cochrane Handbook Chapter 10 advises that tests for funnel plot asymmetry should only be used if a minimum of 10 studies are included in the meta-analysis. As this threshold was not reached, funnel plot asymmetry was not calculated. 17 Quantitative analysis of the studies not included in the meta-analysis The pooled effect sizes of studies not included in the meta-analysis where intra-articular corticosteroid was not used as a control are shown in forest plots ( Figs 8 – 10 ). All comparisons were not statistically significant when evaluating the pooled effect size. Park et al. 42 showed large effect sizes at the outcome measurements for pain, disability and external rotation for the earliest follow-ups post-intervention. In that study, a combination of intensive mobilization after hydrodilatation was compared with general physiotherapy.
Discussion The present systematic review investigated the effectiveness of hydrodilatation for frozen shoulder in terms of pain, shoulder disability and ROM, which were considered proxy indicators of therapeutic effects. Hydrodilatation demonstrated transient improvements in shoulder disability during the early follow-up periods. Additionally, significant improvements in passive external rotation were observed at the earliest and latest follow-ups. When comparing the pooled effects of hydrodilatation to other reference treatments, such as MUA, ACR and general physiotherapy, no significant differences were identified. Contracture of the coracohumeral ligament is considered the predominant pathology in frozen shoulder. During image-guided hydrodilatation, leakage of contrast agents into the subscapularis bursa is often a sign of capsular rupture. 11 This occurrence suggests that, in comparison with the posterior capsule, the anterior joint capsule is less resistant to the stretching forces of the injectate, which may account for the improvements in passive external rotation. However, more research is required to confirm this hypothesis. Various epidemiological studies have identified a link between diabetes mellitus and frozen shoulder. 57–59 Indeed, this systematic review included a total of 224 individuals (18%) with diabetes mellitus. In a previous Cochrane review, Buchbinder et al. identified one study comparing hydrodilatation versus placebo, and found improvements in shoulder pain and ROM. However, there was insufficient evidence to suggest that hydrodilatation prevailed over intra-articular corticosteroid injections, which are well reported for the treatment of a frozen shoulder. 10 The combination of the two treatments may induce a synergistic effect, the former abating glenohumeral joint inflammation and the latter facilitating joint cavity expansion. 11 Most of the evidence in the present systematic review is derived from comparisons between hydrodilatation versus intra-articular corticosteroid injections alone. The results of this review support previous studies, which also found statistically significant but transient improvements in shoulder disability and passive external rotation. 11 Thus, clinicians must balance the immediate improvements in disability and external rotation with the possible negative consequences of hydrodilatation, such as the acute pain following joint capsular rupture. However, we did also identify improvements in passive external rotation at the latest follow-ups, contrary to the findings of previous studies. 11 Furthermore, mixed results were evidenced when comparing the efficacy of hydrodilatation and MUA. Park et al. found statistically significant improvements in pain, disability and external rotation for MUA when compared with hydrodilatation. 42 On the other hand, Quraishi et al. identified that hydrodilatation provided statistically significant improvements in pain compared with MUA in the earliest follow-up periods. 51 However, there were no significant differences in pain scores at late follow-ups and in terms of disability outcome measures. Therefore, MUA should be considered secondary to hydrodilatation given its uncertainty regarding its superiority. Also, MUA is a relatively expensive inpatient procedure, whereas hydrodilatation is an outpatient treatment which does not require anaesthesia. Other recognized drawbacks of MUA include humeral fractures, isolated infraspinatus paralysis, brachial plexus traction injuries and rotator cuff tears. 
47 , 49 , 51
Conclusion Hydrodilatation may provide early improvements of disability in addition to short- and long-term improvements in passive external rotation in frozen shoulder. However, there is comparable effectiveness of glenohumeral joint hydrodilatation and intra-articular corticosteroid injection when assessing most long-term outcomes. Hydrodilatation is a promising alternative treatment to the more expensive surgery. Clinicians must manage patient expectations appropriately given the wide number of reported complications. Finally, well-designed, appropriately powered RCTs, with a low risk of bias, are required to confirm the relevance and validity of hydrodilatation in the management of frozen shoulder.
Daryl Poku and Rifat Hassan are co-first authors Abstract Introduction It is unclear whether hydrodilatation is beneficial in the management of frozen shoulder compared with other common conservative management modalities. This systematic review evaluates the efficacy of hydrodilatation for the management of frozen shoulder. Sources of data A systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. An extensive search of PubMed, Embase, Scopus, Cochrane Central, Web of Science and CINAHL databases using multiple keyword combinations of ‘shoulder’, ‘rotator’, ‘adhesive capsulitis’, ‘hydrodilatat * ’, ‘distension’ since inception of the databases to June 2023 was implemented. Areas of agreement Hydrodilatation leads to at least transient more marked improvements in shoulder disability and passive external rotation compared with intra-articular corticosteroid injections. Areas of controversy Hydrodilatation improves passive external rotation in the longer term. Moreover, hydrodilatation may be a preferable option over manipulation under anaesthesia, given its lower cost and better patient convenience. Growing points Intensive mobilization after hydrodilatation is a promising adjuvant treatment option for patients suffering from a frozen shoulder. Areas timely for developing research Although current evidence suggests that hydrodilatation provides a transient improvement in disability in patients with frozen shoulder, its clinical relevance remains unclear. Further research is necessary to establish its role in the management of the condition.
Limitations This investigation presents several limitations. Firstly, as frozen shoulder of all durations was examined as a whole, we could not determine the best way to treat each of the stages of frozen shoulder. Secondly, both within and across trials, different volumes of hydrodilatation fluid were utilized. As a result, we were unable to assess the association between injectate volume and its clinical efficacy. Therefore, to standardize the delivery of hydrodilatation in future studies, researchers and clinicians should adhere to recently published guidelines. 60 Thirdly, our secondary outcomes included several shoulder ROM components that might lead to spurious positive results. As a result, any favourable secondary outcomes should be carefully assessed and supported by further research. Fourthly, publication bias was not assessed, as fewer than ten studies were included in the meta-analysis. Fifthly, our meta-analysis software (Review Manager 5.4) was not able to differentiate the specific outcome measures and comparative treatments on the forest plots for the studies by Park (2014) and Yoon (2016) (Figs 8–10). This made it impossible to visually distinguish which comparative treatment demonstrated superior efficacy; a simplified, explicitly labelled forest plot is sketched below for illustration. Furthermore, relatively few outcomes, namely changes in pain intensity, disability and ROM, were used to assess the efficacy of hydrodilatation. As a result, several potentially important outcomes were not considered, including patient satisfaction and the incidence of complications. Also, the role of concurrent physiotherapy on the effects of hydrodilatation was not measured, since patients' post-intervention exercise routines differed among the included trials and were not described in sufficient detail. Therefore, future research should include standardized rehabilitation protocols and ensure that the regimen is adequately described. 61 Finally, doubts regarding the accuracy of injections should be considered, as we did not differentiate the included studies' results based on image-guided versus anatomical landmark-guided injections.
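To make the pooled comparisons discussed above concrete, the following sketch uses placeholder effect sizes and generic study labels, not data from this review, to show the DerSimonian-Laird random-effects pooling of standardized mean differences that packages such as Review Manager implement, together with a minimal forest plot in which the comparator of each study is labelled explicitly (addressing the labelling limitation noted above).

```python
# Illustrative only: placeholder SMDs/variances, not results from the included trials.
import numpy as np
import matplotlib.pyplot as plt

studies = ["Study A (vs MUA)", "Study B (vs IACS)", "Study C (vs MUA)"]
smd = np.array([-0.40, -0.25, 0.30])   # hypothetical standardized mean differences
var = np.array([0.05, 0.04, 0.06])     # hypothetical within-study variances

# DerSimonian-Laird estimate of the between-study variance (tau^2)
w_fixed = 1.0 / var
y_fixed = np.sum(w_fixed * smd) / np.sum(w_fixed)
q = np.sum(w_fixed * (smd - y_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(smd) - 1)) / c)

# Random-effects pooled estimate and 95% confidence interval
w_re = 1.0 / (var + tau2)
pooled = np.sum(w_re * smd) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(f"Pooled SMD {pooled:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}), tau^2 = {tau2:.3f}")

# Minimal forest plot with an explicit comparator label for each study
fig, ax = plt.subplots(figsize=(6, 2.5))
ypos = np.arange(len(studies))[::-1]
ax.errorbar(smd, ypos, xerr=1.96 * np.sqrt(var), fmt="s", color="black")
ax.axvline(0, linestyle="--", linewidth=0.8)
ax.plot(pooled, -1, marker="D", color="black")
ax.set_yticks(list(ypos) + [-1])
ax.set_yticklabels(studies + ["Pooled (random effects)"])
ax.set_xlabel("Standardized mean difference (negative favours hydrodilatation here)")
fig.tight_layout()
fig.savefig("forest_plot.png", dpi=150)
```

Negative values are taken to favour hydrodilatation purely for illustration; the actual direction depends on how each outcome measure is scored.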
CRediT author statement Daryl Poku (Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing—original draft), Rifat Hassan (Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing—original draft), Filippo Migliorini (Methodology, Software, Validation, Visualization), Nicola Maffulli (Methodology, Software, Validation, Visualization, Writing—review and editing). Conflict of Interest Statement The authors declare that they have no conflict of interest. Funding No external source of funding was used. Data Availability All data underlying this submission have been reported in the manuscript.
CC BY
no
2024-01-16 23:43:51
Br Med Bull. 2023 Jul 26; 147(1):121-147
oa_package/d3/3b/PMC10788845.tar.gz
PMC10788851
38039513
Introduction Nucleophosmin ( NPM1 ) mutations ( NPM1 mut ) are present in approximately one-third of adults with acute myeloid leukemia (AML), 1 and despite a generally favorable prognosis, a significant proportion (26%-44%) will relapse. 2 , 3 , 4 Importantly, NPM1 mut provides a stable target for monitoring measurable residual disease (MRD) using molecular methods, 5 and patients with rising MRD levels after treatment (now called MRD relapse 6 ) inevitably progress to frank relapse without intervention. 7 , 8 Although transplantation may play an important role for eligible patients, proceeding to transplant with high levels of MRD appears to be associated with poor outcomes. 9 , 10 , 11 There are currently limited data regarding interventions for molecular failure, and treatment options are not well defined. In the RELAZA2 trial, 17 of 32 patients (55%) with NPM1 mut AML treated with azacitidine at MRD relapse achieved MRD negativity. 12 More recently, 2 retrospective studies using venetoclax and azacitidine or low-dose cytarabine (LDAC) in this situation reported MRD negativity in 11 of 12 (92%) and 9 of 11 patients (82%), 13 , 14 and a number of patients in both studies subsequently received an allogeneic transplant. Because of the convenience and low toxicity of these regimens compared with salvage chemotherapy (SC) and the lack of alternatives, off-label use of venetoclax combinations in this situation has become common in several European countries. Here, we present outcomes in a large international real-world cohort of patients with NPM1 mut AML and MRD relapse or persistence treated with venetoclax combinations.
Methods Patients Patients with AML with an NPM1 mutation (of any type) who had received venetoclax combinations for molecular failure were retrospectively identified from 20 hospitals in the United Kingdom, Sweden, Australia, Spain, Denmark, and Ireland between May 2017 and October 2022. The inclusion criteria were as follows. Patients had to be aged ≥16 years with a diagnosis of AML according to World Health Organization 2016 15 with an NPM1 mutation at diagnosis. They had to have received anthracycline-based induction chemotherapy as first-line therapy ( supplemental Figure 1 ) and had molecular failure diagnosed in 1 of 5 central reference laboratories, which was defined as follows. Patients either had MRD relapse as defined by European LeukaemiaNet 2022 (ie, either conversion from MRD negativity to positivity confirmed on a second sample [molecular relapse] or a confirmed 1-log10 rise in transcript expression [molecular progression]) 6 , 16 or had persistent MRD at the end of treatment (EOT; ie, molecular persistence) and at least 1 risk factor for progression (FLT3-ITD or EOT NPM1 mut MRD <4.4-log reduction). 17 Patients had to have at least 1 posttreatment bone marrow sample evaluable for MRD response assessment by reverse transcription quantitative polymerase chain reaction. Patients showing hematologic or extramedullary relapse before treatment and those treated with high-intensity venetoclax-based regimens were excluded from this study. Twelve patients from a previous publication were also included in this cohort. 13 FLT3 mutational status was assessed at diagnosis in accredited diagnostic laboratories. This study was approved by local ethics committees in accordance with the Declaration of Helsinki. Informed consent was waived for this retrospective study. Treatment Patients were treated under institutional protocols using off-label venetoclax (100-600 mg taken orally daily for 7-28 days) in combination with azacitidine (75-100 mg/m2 subcutaneously daily for 5-7 days), LDAC (20 mg/m2 subcutaneously daily for 7-10 days), or decitabine (20 mg/m2 intravenously daily for 5 days). Patients proceeded to allogeneic stem cell transplantation or ceased treatment at the discretion of the treating physician. MRD assessment Patients were routinely monitored by reverse transcription quantitative polymerase chain reaction for mutant NPM1 transcripts using bone marrow aspirate samples (except for 1 patient monitored by a DNA assay due to a rare NPM1 mutation). Complementary DNA was prepared from total RNA, and NPM1 -mutated transcripts were amplified with mutation-specific primers as previously described. 8 , 18 NPM1 mutant transcript levels were compared with the expression of the ABL1 reference gene. Quantitation was performed with reference to a standard curve of serially diluted plasmid standards (Qiagen). Assay sensitivity varied between patients and samples but was generally in the range of 1:10^5 to 1:10^6. No data on multiparametric flow cytometry were obtained for this study. Response definitions The following response definitions were used. MRD negativity required amplification of NPM1 mutated transcripts in fewer than 2 replicates out of 3, using a cycle threshold (Ct) cutoff of 40, in a sample with adequate sensitivity indicated by a median ABL Ct <26.5. MRD reduction required a reduction in NPM1 mutated transcripts of ≥1 log10 compared with pretreatment levels. MRD progression required an increase in NPM1 mutated transcripts of ≥1 log10.
Morphological relapse required the reappearance of >5% blasts in blood or bone marrow or extramedullary disease. Patients not meeting any of these criteria were designated to have stable disease. The overall response rate included patients who met the criteria for either MRD negativity or MRD reduction. Outcome measures Overall survival (OS) was measured from day 1 of initiation of treatment to the date of death from any cause. Event-free survival (EFS) was measured from day 1 of treatment to the date of treatment failure, molecular or hematologic relapse, or death from any cause, whichever occurred first. Molecular relapse-free survival (RFS) was calculated only for patients achieving molecular response and defined as the time from the date of achievement of response until the date of molecular or hematologic relapse or death from any cause. In patients who ceased treatment, it was measured from the date of treatment cessation until the date of molecular or hematologic relapse or death from any cause. Patients not known to have relapsed or died at last follow-up were censored on the date they were last known to be alive. Statistical analysis Quantitative variables were compared using Mann-Whitney U test or Kruskal-Wallis test and categorical variables using χ 2 test. A 2-sided P value <.05 was considered statistically significant. The Kaplan-Meier method was used to assess OS, EFS, and RFS. A time-dependent regression analysis was performed to evaluate the effect of allogeneic hematopoietic stem cell transplantation (HSCT), represented using the Simon-Makuch method. These analyses were done using tmerge() function from R survival package (v. 3.5-5) and RcmdrPlugin.EZR R package (v. 1.61). Receiver operating characteristic curve analysis was used to determine the optimal cutoff value (Youden Index) of MRD that best correlated with response.
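As an illustration of the response classification and cutoff analysis described above, the short sketch below uses simulated values (not the study cohort) to classify a molecular response as a ≥1-log10 reduction in the NPM1/ABL1 transcript ratio and to locate the pretreatment MRD cutoff that maximizes the Youden index on a receiver operating characteristic curve. All numbers are placeholders.

```python
# Illustrative sketch with simulated data, not the study cohort.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
n = 79
# Pretreatment NPM1 copies per 1e5 ABL (log-normal spread around a few hundred copies)
pre = rng.lognormal(mean=np.log(378), sigma=2.3, size=n)

# Simulated best on-treatment log10 reduction; a deeper pretreatment burden responds less often
log_reduction = rng.normal(loc=2.0 - 0.8 * (np.log10(pre) - np.log10(378)), scale=1.5, size=n)
responder = log_reduction >= 1.0        # >=1-log10 reduction counted as a molecular response

y_true = (~responder).astype(int)       # 1 = no molecular response
y_score = np.log10(pre)                 # higher pretreatment burden = higher score
fpr, tpr, thr = roc_curve(y_true, y_score)
j = tpr - fpr                           # Youden index at each candidate threshold
best = int(np.argmax(j))
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
print(f"Youden-optimal cutoff ~ {10 ** thr[best]:.0f} NPM1 copies per 1e5 ABL "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```

With real data, the copy numbers would come from the standard-curve quantification described above, and the resulting cutoff would be reported alongside its sensitivity and specificity.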
Results We identified 79 patients (median age, 62; range, 18-81 years) meeting the inclusion criteria. Thirty-one of 79 patients (39%) had a FLT3 mutation at diagnosis, of whom 22 (28%) had FLT3 -ITD. Seven of 79 patients (9%) had received a prior allograft ( Table 1 ). The type of molecular failure was MRD relapse in 52 patients (66%, comprising 43 patients with molecular relapse and 9 with molecular progression) and molecular persistence in 27 (34%). Among the 27 patients treated for MRD persistence, 19 of 27 (70%) had only 1 risk factor for molecular progression (EOT NPM1 mut MRD <4.4-log reduction and were FLT3 -ITD wild type), and 8 (30%) also had FLT3 -ITD mutation. The median time from diagnosis of AML to molecular failure was 11 (range, 1-98) months, and the median level of MRD before treatment was 378 NPM1 copies per 10 5 ABL (range, 0.27-1 410 000; Table 1 , Figure 1 A). Patients were treated under institutional protocols using off-label venetoclax in combination with azacitidine (44/79 patients [56%]), LDAC (34/79 patients [43%]), or decitabine (1 patient). Azole antifungal prophylaxis was used in 48 patients (66%) with appropriate venetoclax dose reductions when indicated ( Table 1 ). Patients received a median of 3 cycles (range, 1-25), with a median time between cycles of 32 days. MRD response The median time from initiation of therapy to best MRD response was 56 days (range, 14-724). Three responding patients had an initial reduction in MRD but only achieved MRD negativity after >12 months of treatment. Overall, MRD negativity was achieved in 56 of 79 patients (71%), and a molecular response (≥1-log reduction in MRD level) was observed in a further 10 of 79 (13%) for an overall molecular response rate of 84%. MRD negativity was achieved in 34 of 43 patients (79%) treated for molecular relapse, 17 of 27 (63%) of those treated for molecular persistence, and 5 of 9 (56%) of those treated for molecular progression. Molecular response was achieved in 39 (91%), 21 (78%), and 6 (67%) patients, respectively ( Table 1 ). Three patients had been previously exposed to venetoclax combinations, 2 of them reached MRD negativity, and 1 progressed despite treatment. Similar response rates were found irrespective of the combination regimen, with molecular responses observed in 84% of patients treated with azacitidine, 82% with LDAC, and 100% with decitabine ( Figure 1 B). Patients who had received a previous allogeneic HSCT had similar rates of response (5/7 [71%]) compared with those who had not (61/72 [85%]). A pretreatment cutoff value of 365 copies of NPM1 /10 5 ABL at relapse was determined to be the most discriminative predictor of response ( supplemental Figure 3 ). Patients with >365 copies of NPM1 /10 5 ABL were less likely to achieve a response (MRD negativity or reduction) with venetoclax combinations (odds ratio, 4.00; 95% IC, 1.08-15.8). Despite a lower response rate in patients with ≥365 copies of NPM1 /10 5 ABL before treatment, there were no differences in OS or EFS ( supplemental Figure 4 ). Patients with MRD levels below and above the stablished cutoff point of 365 NPM1 /10 5 ABL copies proceeded to HSCT at similar rates (38.5% vs 50%, respectively; P = .31) (data not shown). Comutational landscape Next-generation sequencing data at diagnosis were available for 73 of 79 patients. In this cohort, the most common co-occurring mutations were DNMT3A (34/79 [43%]), FLT3 -ITD (22/79 [28%]), and IDH2 (17/73 [23%]). 
Patients with FLT3 -ITD mutations at diagnosis showed a lower response rate to venetoclax combinations (64% vs 91% in wild type; P = .005). We did not observe any differences in response rate or outcome according to DNMT3A , IDH1/2 , or RAS pathway mutational status at diagnosis ( Table 2 ). Adverse events Grade 4 neutropenia and thrombopenia were reported in 52 (66%) and 21 patients (27%), respectively. Eighteen patients required unplanned hospitalization due to febrile neutropenia, and 2 patients were admitted to critical care during treatment; 1 of them due to severe acute respiratory syndrome coronavirus 2 infection ( supplemental Table 2 ). No deaths were reported during treatment. Outcomes With a median follow-up of 17 months (range, 2-64), 2-year OS was 67%, and 2-year EFS was 45%, with a median EFS of 16 months ( Figure 2 A-B). We found no differences in outcomes regardless of the treatment used or the type of MRD failure. The presence of FLT3 -ITD mutation at diagnosis was associated with inferior OS (hazard ratio [HR], 2.50; 95% confidence interval [CI], 1.06-5.86; P = .036) and EFS (HR, 1.87; 95% CI, 1.06-3.28; P = .03) ( Table 2 , Figure 2 C-D). Forty-four of 79 patients (56%) underwent allograft at a median time from diagnosis of molecular failure of 5.2 months (range, 1-13.5), including 41 of 44 (93%) without further therapy, of whom 25 of 41 (57%) were MRD negative before transplant. In these 41 patients, allogeneic transplant did not have an impact on OS (HR, 1.28; 95% CI, 0.52-3.16; P = .6) or EFS (HR, 0.81; 95% CI, 0.43-1.56; P = .5) compared with those who did not undergo transplantation ( Table 2 , Figure 3 A-B). Three patients underwent transplantation after subsequent treatment with FLAG-Ida-venetoclax (n = 1), 19 FLAG-Ida (n = 1), and gilteritinib (n = 1) due to lack of response. Among the 41 patients who proceeded to HSCT without any additional therapy, MRD negativity before HSCT did not have an impact on OS (median OS, not reached vs 21 months in MRD positive; P = .31) or EFS (median, 18 vs 12 months in MRD positive; P = .42) ( supplemental Figure 5 ). Cumulative incidence of relapse at 12 months after transplant was 28% ( supplemental Figure 6 ). Cessation of treatment Nineteen patients who achieved a molecular response (18 of whom who achieved MRD negativity) and did not proceed to transplant electively ceased treatment after a median of 10 cycles (range, 2-30). Two-year OS was 76% in the 18 patients who were MRD negative at the time of treatment cessation, and 2-year molecular RFS was 62% ( Figure 4 B-C). Of note, only 3 (16%) of these patients had a FLT3 -ITD mutation.
Discussion To our knowledge, this is the largest report to date evaluating the efficacy of low-intensity chemotherapy combined with venetoclax for NPM1 molecular failure. The efficacy of SC has been demonstrated before in this subset of patients; for example, in the NCRI AML17 trial, 27 patients with molecular relapse received SC, and 16 (59%) achieved MRD negativity. 9 In the CETLAM group cohort, 10 of 33 patients with molecular failure received SC (FLAG-IDA, HiDAC), and 80% achieved MRD negativity. 20 In the VALDAC study, patients received LDAC and venetoclax after MRD or oligoblastic relapse (defined as <15% bone marrow blasts); of those with NPM1 mut AML, 11 of 20 patients (55%) with MRD relapse, and 6 of 8 with oligoblastic relapse achieved MRD negativity. 21 Although venetoclax combinations have been reported to have particular efficacy in NPM1 mut AML, 22 the response rate (complete remission + complete remission with incomplete count recovery) for frank hematologic relapse was only 46% in a retrospective study. 23 Responses are similar with these combinations when patients relapse after HSCT, but in 1 report, 2 of 2 patients with NPM1 mut with molecular relapse had a sustainable remission. 24 Here, we report molecular complete remission rates of 56% to 79% (depending on the type of failure), and this is consistent with previous smaller studies in the molecular failure setting reporting MRD negativity in 82% to 92% of patients. 13 , 14 MRD negativity rates were similar to those reported with SC, despite the much higher toxicity and health care resource use associated with the latter. 9 , 20 We found a rapid response to venetoclax, with best response achieved in more than half of the patients before the third cycle, consistent with previous literature. 13 , 25 , 26 Three patients had an initial molecular response but only achieved MRD negativity after >12 months of treatment. Of note, this is consistent with a previous report in patients with newly diagnosed AML treated with firstline azacitidine-venetoclax, in which 21% of patients who achieved negative MRD by flow cytometry did so after >7 cycles. 27 A previous publication found that patients with NPM1 mut AML who have positive MRD at EOT have a heterogeneous evolution, with a 1-year EFS <50% in patients with failure to clear MRD below 4.4 log 10 from baseline and/or FLT3 -ITD mutation. 17 The benefit and optimal timing of treatment for these patients is not well determined, so, only those with ≥1 of these risk factors for progression were included in this cohort. Consistent with previous studies, we found worse OS and EFS in the presence of both risk factors, despite treatment with venetoclax combinations. Nonetheless, whether these patients benefit from therapy needs to be determined in prospective trials. FLT3 -ITD mutation, previously described as a marker of worse response to venetoclax, 22 , 28 , 29 was also associated with a lower response rate in our cohort. AML harboring K/NRAS mutations have shown an intermediate response to venetoclax combinations, 23 , 28 whereas patients with IDH mutations appear to have superior outcomes. 30 , 31 We did not find differences in responses or outcomes according to K/NRAS or IDH 1/2 mutational status in this cohort, although the limited patient numbers preclude any definite conclusions regarding these molecular subgroups. Allogeneic transplant did not result in improved OS or EFS in this cohort. 
Although the decision to proceed with HSCT and when to do so was individual, and both the cohort size and length of follow-up are relatively limited, these data raise the question of the potential benefit of HSCT in patients with molecular failure treated with venetoclax-based combinations. However, addressing this question will require a randomized study. In contrast to previous reports 9 , 10 for patients proceeding to HSCT, pretransplant MRD did not have an impact on outcome. This discrepancy may be related to the relatively small cohort and relatively short follow-up after HSCT. There were insufficient data available regarding conditioning intensity to evaluate the impact of this in patients with MRD positivity. 9 , 32 Eighteen patients who ceased treatment after achieving MRD negativity had good outcomes, with molecular RFS at 4 years of 62%. A previous report showed that patients treated with frontline venetoclax combinations who achieved MRD negativity had NPM1 or IDH2 mutations, and those who discontinued treatment after 12 months had a median RFS of 59 months. 33 Our results indicate that the option of treatment cessation has comparable outcomes after MRD relapse in NPM1 mut AML treated with venetoclax combinations. In this cohort, the rate of adverse events including hematologic toxicity was low, and the toxicity profile appeared more favorable than with frontline therapy with venetoclax-based regimens, due to a lower incidence of febrile neutropenia (23% vs 42%). 26 , 34 The main limitation of this study is its retrospective nature and patient recruitment, influenced by the availability of off-label venetoclax treatment, which may have induced a selection bias. Furthermore, being a multicenter cohort, the method used for MRD assessment, although standardized, may have introduced some differences. Given the diverse treatment strategies used in this retrospective study, ranging from a finite number of venetoclax-based courses to consolidation with an allogeneic transplant, the optimal consolidation strategy in patients achieving a molecular complete remission is uncertain and should be addressed in future prospective studies. A phase 2, nonrandomized trial to assess the efficacy of azacitidine-venetoclax as a bridge to HSCT in NPM1 molecular failure is currently active ( www.clinicaltrials.gov identifier #NCT04867928).
Key Points • 66 of 79 patients (84%) treated with venetoclax combinations for NPM1 molecular failure achieved a molecular response, and 56 of 79 (71%) became MRD negative. • Venetoclax combinations are a potentially effective treatment for molecular failure, either as a bridge to transplant or as definitive therapy. Visual Abstract Abstract Molecular failure in NPM1 -mutated acute myeloid leukemia (AML) inevitably progresses to frank relapse if untreated. Recently published small case series show that venetoclax combined with low-dose cytarabine or azacitidine can reduce or eliminate measurable residual disease (MRD). Here, we report on an international multicenter cohort of 79 patients treated for molecular failure with venetoclax combinations and report an overall molecular response (≥1-log reduction in MRD) in 66 patients (84%) and MRD negativity in 56 (71%). Eighteen of 79 patients (23%) required hospitalization, and no deaths were reported during treatment. Forty-one patients were bridged to allogeneic transplant with no further therapy, and 25 of 41 were MRD negative, as assessed by reverse transcription quantitative polymerase chain reaction, before transplant. Overall survival (OS) for the whole cohort at 2 years was 67%, event-free survival (EFS) was 45%, and in responding patients, time-dependent analysis showed no difference in survival between those who did and did not receive a transplant. Presence of a FLT3 -ITD mutation was associated with a lower response rate (64% vs 91%; P < .01), worse OS (hazard ratio [HR], 2.50; 95% confidence interval [CI], 1.06-5.86; P = .036), and worse EFS (HR, 1.87; 95% CI, 1.06-3.28; P = .03). Eighteen of 35 patients who did not undergo transplant became MRD negative and stopped treatment after a median of 10 months, with a 2-year molecular relapse-free survival of 62% from the end of treatment. Venetoclax-based low-intensity chemotherapy is a potentially effective treatment for molecular relapse in NPM1 -mutated AML, either as a bridge to transplant or as definitive therapy.
Conflict-of-interest disclosure: A.M.-R. reports consultancy or advisory role in Bristol Myers Squibb (BMS), AbbVie, and Kite Gilead; travel grants from Kite Gilead, Roche, Takeda, Janssen, and AbbVie; and speaker fees from AbbVie and Gilead. A.H.W. has served on advisory boards for Novartis, AstraZeneca, Astellas, Janssen, Jazz, Amgen, Roche, Pfizer, AbbVie, Servier, Gilead, BMS, and BeiGene; has consulted for AbbVie, Servier, Novartis, Shoreline, and Aculeus; receives research funding to the institution from Novartis, AbbVie, Servier, BMS, Syndax, Astex, AstraZeneca, and Amgen; serves on speaker’s bureaus for AbbVie, Novartis, BMS, Servier, and Astellas; is an employee of the Walter and Eliza Hall Institute (WEHI), and WEHI receives milestone and royalty payments related to the development of venetoclax; current and past employees of WEHI may be eligible for financial benefits related to these payments, and A.H.W. receives such a financial benefit. M. Jädersten has received institutional support from AbbVie for arranging educational webinars and has served on advisory boards for AbbVie. S.K. has served on advisory boards for Astellas, Jazz, AbbVie, Servier, and Novartis; speaker’s bureau of Astellas, Jazz, and Novartis; and research funding from Novartis. D.T.K. received consulting/advisory fees from AbbVie, Atheneum, and Astellas Pharma. V.M. has provided consultancy and received speaker honorarium from AbbVie, Jazz, Novartis, and Pfizer, and educational grants from Astellas and Takeda. N.O. has served on advisory boards for Takeda and Jazz; has consulted for AbbVie, Astellas, BMS, and Servier; and has received support for conference registration/accommodation/travel costs from AbbVie, Jazz, Pfizer, Servier, and Takeda. A.S.R. has provided consultancy to AbbVie and received travel grants from Jazz Pharmaceuticals. The remaining authors declare no competing financial interests.
Supplementary Material Acknowledgments This study was supported by fellowship grants from the Haematology Society of Australia and New Zealand and the RCPA Foundation (J.O.), and laboratory funding from Blood Cancer UK, Cancer Research UK, and the National Institute for Health Research (R.D.). Authorship Contribution: R.D. and N.R. conceived the study; C.J.-C., J.O., and R.D. wrote the manuscript; C.J.-C. and J.O. performed statistical analyses; and all authors contributed to the manuscript and interpretation of data, approving the final version of the manuscript.
CC BY
no
2024-01-16 23:45:28
Blood Adv. 2023 Dec 2; 8(2):343-352
oa_package/b6/f7/PMC10788851.tar.gz
PMC10788855
0
Introduction Carbohydrates constitute the most structurally diverse class of natural products and can serve many functions in cells and organisms. 1 Glycans refer to carbohydrate chains that can be free or attached to proteins or lipids to form simple or complex glycoconjugates. 2 Glycans participate in almost every biological process. 3 In addition to forming important structural features, the glycans of glycoconjugates modulate or mediate a wide variety of functions, such as cell adhesion, recognition, receptor activation, or signal transduction in animal and plant cells. 4 Bacteria can synthesize a diverse array of glycans, which are found attached to proteins and lipids or as polysaccharides loosely associated with the cells. 1 The precise role of these glycans in bacterial symbiosis and in cell–cell and cell–environment interactions is just beginning to be understood. Most bacterial glycans are located at the surface of cells, deposited in the extracellular space, or attached to soluble signaling molecules. 1 In this respect, when a biofilm is formed, the extracellular polymeric substances (EPS) are the components that form the matrix wherein the microorganisms are embedded, and bacterial glycans are therefore among the important components of the EPS. However, EPS are frequently reported as consisting of proteins (structural proteins and enzymes), polysaccharides, nucleic acids, and lipids, 5 which overlooks the possibility that the proteins and polysaccharides, and the lipids and polysaccharides, in EPS may be present not only as separate components but also in various forms of glycoconjugates. 6 Moreover, the frequently used EPS characterization methods (e.g., colorimetric methods) only allow for characterization of the separate classes of molecules but provide little insight into the glycoconjugates. At present, one of the proven effective methods for EPS glycoconjugate analysis is fluorescence lectin bar-coding (FLBC). 7 The lectins used in this approach can bind to specific carbohydrate regions, allowing for the screening of glycoconjugates in a hydrated biofilm matrix. This method has been successfully applied to the analysis of a few different types of biomass, such as saline aerobic granular sludge, anaerobic granular sludge, anammox granular sludge, and “ Candidatus Accumulibacter phosphatis” enrichment. 8 − 10 Glycans with sugar residues including sialic acids, mannose, galactose, N -acetyl-galactosamine, and N -acetyl-glucosamine were found in the EPS of those biomasses. 11 , 12 It is worth pointing out that the information provided by this method only reflects the composition of the carbohydrates; it is still unclear whether these carbohydrates are attached to proteins, lipids, or present simply as polysaccharides. Hence, to unravel the complete glycan profile of the EPS in biofilms, it is important to establish methodologies to identify glycoconjugates such as glycoproteins and glycolipids. Protein glycosylation is the covalent attachment of single sugars or glycans to select residues of proteins. It is a common yet highly complex post-translational modification. Protein glycosylation has profound effects on protein function and stability. 13 Historically, glycosylation of proteins was considered to occur exclusively in eukaryotes; only recently has it been accepted that prokaryotes can also perform (complex) protein glycosylation. 14 The glycosylation of prokaryotic proteins is far less studied, and most of the research focuses on specific pathogenic bacteria.
In the few studies of glycoproteins in environmental samples, such as the glycoproteins in the EPS of anammox granular sludge, mass spectrometry was performed. 15 While this approach enables deciphering the structure of glycans derived from glycoproteins, it is not amenable to adaptation to a high-throughput platform. 16 This creates a severe bottleneck in monitoring the diversity and dynamic alteration of the glycan profile, especially given that such diverse structures are important interfaces between bacteria and the environment. Thus, the major challenge in glycan research in the environmental field lies in developing high-throughput and comprehensive characterization methodologies to elucidate the structure and monitor the change of glycosylation. To this end, in the current research, the dynamic change of the glycan profile of a few EPS samples was monitored by gas chromatography–mass spectrometry (GC-MS) and high-throughput lectin microarray, as well as by sialylation and sulfation analyses. Those EPS samples were extracted from aerobic granular sludge collected at different stages during its adaptation to seawater conditions. The information generated sheds light on approaches to identify and monitor the diversity and dynamic alteration of the glycan profile of the biomass in response to environmental stimuli.
Experimental Methods Reactor Operation Seawater-adapted aerobic granular sludge was cultivated in a 2.8 L bubble column (6.5 cm diameter) operated as a sequencing batch reactor (SBR), adapted from de Graaf et al. 12 The reactor was inoculated with aerobic granular sludge cultivated in a lab-scale reactor with glycerol as the carbon source under freshwater conditions. 17 The temperature was controlled at 20 °C, and the pH was controlled at 7.3 ± 0.1 by dosing 1.0 M NaOH or 1.0 M HCl. The DO was controlled at 2 mg of O 2 /L (80% saturation). Reactor cycles consisted of 60 min of anaerobic feeding, 170 min of aeration, 5 min of settling, and 5 min of effluent withdrawal. Artificial seawater was gradually introduced over 13 days until a concentration of 35 g/L was reached. To investigate the glycan profile of the extracellular polymeric substances of aerobic granules during their adaptation to seawater, granules were collected at three different time points: t0, t18, and t30. The sample at t0 refers to the inoculum. The sample at t18 was collected 18 days after the reactor started (5 days after the seawater concentration in the reactor reached 35 g/L; the solids retention time (SRT) in the reactor was not controlled). The sample at t30 was taken 30 days after the reactor start (the SRT in the reactor was controlled at 13.6 days), representing a relatively stable state of seawater-adapted granules. The organic and ash fractions of the biomass were determined according to the standard methods after washing the granules three times with demi-water. 18 For EPS extraction and characterization, the granules were lyophilized immediately and stored at room temperature. Microbial Community Analysis by Fluorescent In Situ Hybridization (FISH) To investigate the microbial community, fluorescent in situ hybridization (FISH) was performed. The handling, fixation, and staining of samples were performed as described in Bassin et al. 19 A mixture of EUB338, 13 EUB338-II, and EUB338-III probes was used to stain all of the bacteria. 20 A mixture of PAO462, PAO651, and PAO846 probes (PAOmix) was used for visualizing polyphosphate accumulating organisms (PAOs). 21 A mixture of GAOQ431 and GAOQ989 probes (GAOmix) was used to target glycogen accumulating organisms (GAOs). 21 The samples were examined with a Zeiss Axioplan 2 epifluorescence microscope equipped with filter sets 26 (bp 575–625/FT 645/bp 660–710), 20 (bp 546/12/FT 560/bp 575–640), and 17 (bp 485/20/FT 510/bp 515–565) for Cy5, Cy3, and fluos, respectively. EPS Extraction from Aerobic Granular Sludge Lyophilized granules were extracted in 0.1 M NaOH (1% VS w/v) for 30 min at 80 °C while being stirred at 400 rpm. The solution was cooled and centrifuged at 4000 g for 20 min at 4 °C. The supernatant was collected and subsequently dialyzed against demi-water overnight in dialysis tubing with a molecular weight cutoff of 3.5 kDa (Snakeskin, ThermoFisher Scientific, Landsmeer). The dialyzed EPS solution was lyophilized and stored at room temperature until further analysis. EPS Characterization Glycosyl Composition Analysis by TMS Method Glycosyl composition analysis of the extracted EPS was performed at the Complex Carbohydrate Research Center (CCRC, University of Georgia) by combined GC/MS of the O-trimethylsilyl (TMS) derivatives of the monosaccharide methyl glycosides produced from the sample by acidic methanolysis. These procedures were carried out as previously described in Santander et al.
22 In brief, lyophilized EPS aliquots of 300 μg were added to separate tubes with 20 μg of inositol as the internal standard. Methyl glycosides were then prepared from the dry sample following the mild acid treatment by methanolysis in 1 M HCl in methanol at 80 °C (16 h). The samples were re-N-acetylated with 10 drops of methanol, 5 drops of pyridine, and 5 drops of acetic anhydride and were kept at room temperature for 30 min (for detection of amino sugars). The sample was then per-o-trimethylsilylated by treatment with Tri-Sil (Pierce) at 80 °C (30 min). These procedures were carried out as described by Merkle & Poppe. 23 GC/MS analysis of the per-o-trimethylsilyl methyl glycosides was performed on an AT 7890A gas chromatograph interfaced to a 5975B MSD mass spectrometer, using a Supelco EC-1 fused silica capillary column (30 m × 0.25 mm ID) and the temperature gradient shown in Table 1 . Sulfated Glycosaminoglycan Assay Detection and quantification of sulfated glycosaminoglycans (sulfated GAGs) in the extracted EPS were performed with the Blyscan sulfated glycosaminoglycan assay (Biocolor, Carrickfergus, UK), according to the manufacturer’s instructions. Samples (2–5 mg) were digested with 1 mL of papain protein digestion solution at 65 °C for 3 h at 300 rpm (Sigma-Aldrich, Zwijndrecht, Netherlands). The supernatant was recovered after centrifugation at 10,000 g for 10 min. 50 μL of sample was then added to 1 mL of 1,9-dimethyl-methylene blue (DMMB) dye reagent. Sulfated GAGs positive components bind and precipitate with the dye and are subsequently isolated and resolubilized. The concentration of sulfated GAGs was measured with a multimode plate reader at 656 nm (TECAN Infinite M200 PRO, Switzerland) as chondroitin sulfate equivalents. Lastly, the distribution of N-linked and O-linked sulfates in the samples was measured by performing nitrous acid cleavage according to the manufacturer’s instructions prior to sulfated GAGs quantification. Nonulosonic Acid Analysis with Mass Spectroscopy Detection of nonulosonic acids (NulOs) in the extracted EPS was done according to the approach described by Kleikamp et al. (2020). In short, lyophilized EPS fractions were hydrolyzed by 2 M acetic acid for 2 h at 80 °C and dried with a Speed Vac concentrator. The released NulOs were labeled using DMB (1,2-diamino-4,5-methylenedioxybenzene dihydrochloride) for 2.5 h at 55 °C and analyzed by reverse phase chromatography Orbitrap mass spectrometry (QE plus Orbitrap, ThermoFisher Scientific, Bleiswijk, Netherlands). Glycan Profiling of Glycoproteins by Lectin Microarray Analysis High-density lectin microarray was generated according to the method described. 24 0.4 μg of EPS was labeled with Cy3-N-hydroxysuccinimide ester (GE Healthcare), and excess Cy3 was removed with Sephadex G-25 desalting columns (GE Healthcare). Cy3-labeled proteins were diluted with probing buffer [25 mM tris-HCl (pH 7.5), 140 mM NaCl, 2.7 mM KCl, 1 mM CaCl 2 , 1 mM MnCl 2 , and 1% Triton X-100] to 0.5 μg/mL and were incubated with the lectin microarray at 20 °C overnight. The lectin microarray was washed three times with probing buffer, and fluorescence images were captured using a Bio-Rex scan 200 evanescent-field-activated fluorescence scanner (Rexxam Co. Ltd., Kagawa, Japan). The obtained signals were mean-normalized, and ANOVA test was performed using IBM SPSS Statistics 24.0 to identify lectins with significantly different intensities between the three samples. 
A heatmap of the lectins with significantly different intensities ( p < 0.05) was generated using the R package pheatmap (version 1.0.12) in RStudio (version 4.2.2). Student's t test was performed in IBM SPSS for the statistical comparison between EPS t18 and EPS t30 to obtain the t-value.
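For readers who prefer a scripted version of this workflow, the sketch below applies the same steps (mean-normalization per array, a one-way ANOVA per lectin across the three samples, and a heatmap of the significant lectins) to synthetic intensities. The three replicate arrays per sample and all numerical values are assumptions for illustration only; the study itself used SPSS and pheatmap as described above.

```python
# Illustrative sketch with synthetic values, not the measured lectin arrays.
import numpy as np
import pandas as pd
from scipy import stats
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
lectins = [f"lectin_{i:02d}" for i in range(97)]
groups = {"EPS_t0": 3, "EPS_t18": 3, "EPS_t30": 3}   # assumed: 3 replicate arrays per sample

# Simulated raw fluorescence intensities: rows = lectins, columns = replicate arrays
raw = pd.DataFrame(
    {f"{g}_r{r}": rng.lognormal(mean=5, sigma=1, size=len(lectins))
     for g, nrep in groups.items() for r in range(nrep)},
    index=lectins,
)

# Mean-normalization: divide each array (column) by its own mean signal
norm = raw / raw.mean(axis=0)

# One-way ANOVA per lectin across the three samples
pvals = []
for lectin in norm.index:
    per_group = [norm.loc[lectin, [c for c in norm.columns if c.startswith(g)]].values
                 for g in groups]
    pvals.append(stats.f_oneway(*per_group).pvalue)
norm["p"] = pvals

# Heatmap of group means for lectins with p < 0.05
sig = norm[norm["p"] < 0.05].drop(columns="p")
group_means = pd.DataFrame({g: sig[[c for c in sig.columns if c.startswith(g)]].mean(axis=1)
                            for g in groups})
if not group_means.empty:
    sns.heatmap(group_means, cmap="coolwarm", center=float(group_means.values.mean()))
    plt.tight_layout()
    plt.savefig("lectin_heatmap.png", dpi=150)
```

If many lectins are screened, a multiple-testing correction (e.g., Benjamini-Hochberg) could be layered on top of the per-lectin P values.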
Results Reactor Operation and Microbial Community in Seawater-Adapted Aerobic Granular Sludge An aerobic granular sludge reactor was inoculated with granular sludge from the other lab reactor (with glycerol as a carbon source (t0)). Acetate was used as a carbon source to enrich specifically for phosphate accumulating organisms (PAOs). 12 The salinity in the reactor was stepwise increased until 35 g/L of seawater was reached. After 7 days, complete acetate and phosphate removal were observed. Granular sludge samples were collected on the 18th and 30th days after the start of the reactor. The typical reactor profiles of t0, t18, and t30 show similar trends in acetate uptake and phosphate removal ( Figure 1 ). During the anaerobic phase, acetate was taken up and a phosphate release was found to be up to 2.72 Pmmol/L. The reactor’s biomass concentration was roughly constant at around 7 g VSS/L with a VSS/TSS of around 76%. The morphology of the granules is shown in Figure 1 . No visual differences were observed among the three samples. According to the FISH analysis, PAO was the dominant microorganism in the three granule samples ( Figure 2 ). While the abundance of glycogen accumulating organisms (GAOs) was much lower than that of PAO. Comparatively, the abundance of GAO in granules collected at t18 ( Figure 2 A) seemed relatively higher than that in granules collected at t0 and t30 ( Figure 2 B,C). It was also observed that the size of the microcolony of PAO was much bigger in granules at t30 than in granules at t0 and t18. EPS Extraction and Characterization The extracted EPS has the same yellow color as the aerobic granules. The yield of EPS at t0, t18, and t30 was 308 ± 117, 385 ± 82 mg/g, and 640 ± 42 (VSS ratio), with VS/TS ratios of 69, 70, and 86%, respectively. Apparently, during the adaptation to seawater conditions, more EPS, which can be extracted with NaOH, was produced. Glycosyl Composition The glycosyl composition of the extracted EPS is listed in Table 2 , and the GC-MS chromatogram is included in the Supporting Information . The total carbohydrate amount increased from EPS t0 to EPS t30 ( Table 1 ). Glucose, rhamnose, mannose, fucose, and galactose were found to be the main components of all samples. The relative molar ratio of each sugar monomer varied among samples, with glucose as the most abundant monomer. Xylose and N- acetyl glucosamine were also found in the seawater-cultured samples, while only the inoculum contained arabinose. Additionally, an unknown sugar was detected in all of the samples at about 29.3 min (marked by asterisk* in GC spectrum in the Supporting Information ). Thus, based on sugar composition, there is a clear difference between the inoculum and seawater-grown granular sludge EPS. NulOs and Sulfated Glycosaminoglycan-like Polymers Glycoconjugate modifications with acidic groups such as sulfate (sulfation) and/or sialic acid (sialylation) on the glycans are common phenomena in the extracellular matrix of eukaryotes. Recently, these two glycoconjugate modifications (sulfation and sialylation) were found to be widely distributed in the EPS of granular sludge as well. 26 In order to investigate the influence of seawater conditions on sulfation and sialylation, the same analysis was performed on the extracted EPS samples. To identify which kinds of nonulosonic acids (NulOs, sialic acids is one type of nonulosonic acids) are present in the granules, mass spectrometry (MS) was applied. 
NulOs were detected in the form of N- acetyl neuraminic acid (NeuAc) and pseudaminic acid/legionaminic acid (Pse/Leg, which are also referred to as bacterial sialic acids in the literature. These two monomers have the same molecular weight and cannot be differentiated by MS). Hence, there are two different kinds of NulOs in all of the EPS samples. These NulOs could be part of glycoconjugates, including glycolipids, glycoproteins, and capsular polysaccharides. The presence of sulfated GAGs was investigated by using the DMMB assay. The following sulfated GAGs, either still attached to the peptide/protein core or as free chains, can be assayed: chondroitin sulfates (4- and 6-sulfated), keratan sulfates (alkali sensitive and resistant forms), dermatan sulfate, and heparan sulfates (including heparins). The total content of sulfated GAGs measured in EPS t0 , EPS t18 , and EPS t30 , was 20.3 ± 0.3, 16.6 ± 0.1, and 25.3 ± 0.2 mg/g, respectively. It seemed that during adaptation to the seawater condition, the amount of these polymers in the EPS was increased. In addition, the percentage of N-sulfated GAGs in the respective EPS increased during adaptation, with the highest percentage in EPS t30 ( Figure 3 ). In comparison to the aerobic granular sludge EPS reported by ref ( 25 ), the total sulfated GAG content in the EPS of the seawater-adapted granules is much lower, mainly half of the reported amount. Likely, the differences in the operational conditions and microbial communities are the causes. Lectin Microarray To evaluate protein glycosylation and monitor the dynamic glycan profile of those glycoproteins, a lectin microarray has been used. It is based on the mechanism that lectins selectively bind with glycans by recognizing their specific patterns. It is worth noting that, in this analysis, proteins in the extracted EPS were fluorescently labeled with Cy3. If the labeled proteins are glycosylated and their distinct glycan structures match with the affinity of the lectins, they will bind with the lectins on the microarray and their fluorescent signal will be recorded by the evanescent-field fluorescence scanner. Thus, a strong fluorescent signal indicates the following: the bound proteins are glycoproteins; the glycan part of the bound protein has the same glycan profile pattern that the lectin can recognize, and the amount of this glycoprotein is high. It was found that for all of the EPS samples, within the 97 lectins used in the lectin microarray, 65 gave a strong fluorescent binding signal. This clearly indicates that there are glycoproteins in all of the EPS samples since only glycoproteins can be detected by the microarray. In addition, from the specificity of the lectins, information on the glycan pattern can be obtained. The result of the lectin microarray showed that there were glycoproteins with N-linked glycosylation (e.g., due to the binding of lectins TxLcl, rXCL, CCA, and rSRL) and O-linked glycosylation (e.g., due to the binding of lectins HEA, MPA, VVA, and SBA). Those glycoproteins contained one or multiple glycans, such as sialic acids (with both α2–3 and α2–6 linkages), lactosamine and/or polylactosamine, mannose (including α1–3 and α1–6 linkages), fucose (including α1–2, α1–3, and α1–4 linkages), N- acetyl glucosamine, and galactose (with and without sulfation) (for details of the lectins, refer to the Supporting Information ). 
Interestingly, 55 lectins were found to be significantly different between the three EPS samples, indicating that the glycan profile of the glycoproteins is altered with the change of the environmental conditions (implied by the color change in Figure 4 from blue to red). If the two EPS extracted from seawater-adapted granules are compared, Figure 5 clearly shows that most of the glycan signals are increased in EPS t18 , meaning that there are more glycosylated proteins in the EPS t18 . In addition, as each lectin has its binding specificity, this also shows that the glycan profile of EPS t18 has extremely strong diversity, while EPS t30 has less glycan diversity. It suggests that, in response to exposure to seawater, the amount of glycoproteins and their glycan diversity first increases; once the granules reach a stable state of adaptation, both the amount of glycoproteins and their glycan diversity tend to decrease. Such a change may also be related to the shift of microbial community; as seen in Figure 2 , at t18, the microbial community was more diverse with the presence of PAO, GAO, and other eubacteria; while at t30, PAO was fully dominating over GAO and other eubacteria.
Discussion In Response to the Exposure to Seawater, the Glycan Profile, Especially That of the Glycoproteins in the EPS of Aerobic Granular Sludge, Varied Significantly During the adaptation to seawater, EPS from aerobic granular sludge exhibited the following variation: there was more EPS, which can be extracted under alkaline conditions (with NaOH present). The yield of the EPS on day 30 was about 2 times that of the inoculum. This is in line with the reported finding that the adaptation of aerobic granular sludge to high saline conditions led to extra EPS production. 27 Within the EPS, the percentage of glycans detected by GC-MS was increased, as well. The amount of glycans was tripled on day 30. It is known that bacterial glycans can act as osmoprotection and desiccation protection factors against the salt. 28 Producing a higher amount of glycans in the EPS might be used by the microorganisms as a strategy to protect themselves from harsh environmental stress factors such as the high salt content in seawater. Looking at the glycosyl composition of the glycans, during the adaptation to seawater condition, xylose and N -acetyl glucosamine appeared, while arabinose disappeared from the sugar monomers. This indicates that after being exposed to seawater, there is a significant change in glycan composition produced in the EPS. The role of these two sugar monomers against seawater conditions is unknown and needs further investigation. It is also noticed that, in the three EPS samples, the amount of glucose is extremely high in comparison to that of all of the other monomers. The possible explanation could be that there might be glucose-rich glycans, such as β glucan or lipopolysaccharides produced as part of the EPS. 29 Further investigation is needed to understand the high glucose content. Within the glycans, besides free polysaccharides, there are glycoconjugates, such as glycoproteins and glycolipids. To further investigate the potential existence of glycoproteins and their glycan profile, lectin microarray analysis was performed. The existence of glycoproteins with diverse glycosylation patterns was observed for all EPS samples, strongly confirming that protein glycosylation is indeed common in aerobic granular sludge. Interestingly, there were more glycoproteins in EPS t18 than EPS t0 and EPS t30 , and the glycosylation pattern of EPS t18 is significantly diverse. This indicates that, in response to the environmental change, i.e., exposure to the increased salt condition, one of the adaptation strategies of the microorganisms can be altering the glycosylation of proteins in quantity and diversity. Once the steady state of adaptation was reached, the diversity of protein glycosylation and the amount of glycoproteins reduced. In fact, similar phenomena were reported in anaerobic granular sludge: a significant shift in the glycoconjugate pattern in anaerobic granular sludge happened with increasing salinity. 30 Therefore, it seems that not only the total glycome profile of the EPS but also the glycan profile of glycoproteins are dynamic and sensitive to environmental stimuli such as salinity. It Is Important to Investigate the Glycan Profile of Glycoproteins in Aerobic Granular Sludge The glycome is defined as the entire complement of glycan structures (including glycoproteins/glycolipids and free polysaccharides) produced by cells. 31 Unlike DNA replication, RNA transcription, or protein translation, glycan biosynthesis is not directed by a pre-existing template molecule. 
32 Instead, the glycome depends on the interplay among the glycan biosynthetic machinery, the available nucleotide sugars (serving as monosaccharide donors), and signals from the intracellular and extracellular environments. Thus, the glycome composition is dynamic and is influenced by both genetic and environmental factors. 33 In granular sludge, the EPS is produced by the microorganisms and is involved in bacterial cells’ interactions with their environment. As the extracellular environmental condition is one of the factors that influence the glycome, a change in the environmental condition must have its own reflection in the glycan profile. As demonstrated in the current research, the glycan profile, especially the glycoproteins in the EPS, is sensitive to environmental stimuli. Due to the fact that protein glycosylation is an important post-translational modification, small changes in the glycans of glycoproteins can have profound consequences for protein function. 32 Such sensitivity and dynamic alteration of the glycan profile in the EPS may influence the chemical and physical structures and properties of the EPS and, furthermore, the stability of granular sludge. Further research is needed to find the correlation among the glycan profile dynamics, the property alteration of EPS, and the activities of the microbial community. Lectin Microarray Can Be Used as a High-Throughput Approach to Monitor the Diversity and Dynamic Change of the Glycoproteins in the Environmental Sample Given the profound impact of glycans on the function of glycoproteins, protein glycosylation might play an important role in the EPS of biofilm. However, protein glycosylation in the EPS remains largely uncharacterized, and the existence of glycoconjugates such as glycoproteins (and glycolipids) in the EPS was very recently reported and started getting attention. 33 On the other hand, the complexity of glycosylation poses an analytical challenge. Current methods for bacterial glycan analysis include MS, HPLC, and HPAEC-PAD. These methods require the release of glycans from a glycoprotein through enzymatic or chemical reactions. This makes an accurate assessment of glycosylation depend on a complete release of all of the glycans that are present in a glycoprotein. In this respect, a significant investment of time and effort is needed, which becomes one of the bottlenecks for a high-throughput study of the diversity and dynamic change of the glycan profile. Recently, using a lectin microarray as a high-throughput approach has attracted great interest. Importantly, the lectin microarray directly measures glycan profiles on an intact protein without the need for enzymatic digestion or clipping glycans from the protein backbone. Such a platform is unique in increasing the possibility of full coverage of all glycan variants of glycoproteins. 34 In the current work, the application of the lectin microarray indeed confirmed the presence of glycoproteins and effectively monitored its alteration along the adaptation to the seawater condition. Additionally, the result of lectin microarray is in line with the result of other analyses performed: i.e., sugar monomers such as mannose, fucose, galactose, and N- acetyl glucosamine were detected by the glycosyl composition analysis through GC-MS. 
The sialic acids captured by the MS and sulfated glycosaminoglycan-like polymers revealed by the DMMB assay were in line with the presence of sialic acids, lactosamine, and galactose with sulfation (e.g., keratan sulfate) detected by the lectin microarray analysis. This suggests that the lectin microarray is a successful platform for glycan profiling of glycoproteins in microbial aggregates such as granular sludge. Despite the success, it is worth noting that as lectins are of diverse specificity, some have cross-reactivity with various glycans. It is relatively difficult to characterize a specific glycan using only one lectin. A second limitation is the lack of availability of lectins that recognize sugars unique to bacteria. Designing a bacteria (or biofilm)-specific lectin microarray is an interesting topic for future research.
Conclusions Protein glycosylation was identified in the extracellular polymeric substances (EPS) of aerobic granular sludge. In response to environmental stimuli such as exposure to seawater, the glycan profile, especially that of the glycoproteins, varied significantly: xylose and N -acetyl glucosamine appeared as sugar monomers that were absent from the inoculum. The amount of glycoproteins and their glycan diversity increased during adaptation and then decreased once the granules reached a stable adapted state. The lectin microarray can be used as a high-throughput approach to monitor the diversity and dynamic change of the glycans of glycoproteins in the EPS of aerobic granular sludge.
Bacteria can synthesize a diverse array of glycans, which are found attached to proteins and lipids or as polysaccharides loosely associated with the cells. The major challenge in glycan analysis of environmental samples lies in developing high-throughput and comprehensive characterization methodologies to elucidate the structure and monitor the change of the glycan profile, especially in protein glycosylation. To this end, in the current research, the dynamic change of the glycan profile of a few extracellular polymeric substance (EPS) samples was investigated by high-throughput lectin microarray and mass spectrometry, as well as by sialylation and sulfation analyses. Those EPS were extracted from aerobic granular sludge collected at different stages during its adaptation to seawater conditions. Glycoproteins were found in all of the EPS samples. In response to the exposure to seawater, the amount of glycoproteins and their glycan diversity increased during adaptation and then decreased once the granules reached a stable adapted state. This study provides insight into approaches to identify and monitor the diversity and dynamic alteration of the glycan profile of the EPS in response to environmental stimuli.
Supporting Information Available

The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsestwater.3c00625 : the GC chromatogram of EPS samples and the results of the lectin microarray (PDF).

Author Contributions

CRediT: Le Min Chen data curation, writing-original draft; Sunanda Keisham data curation, formal analysis, writing-review & editing; Hiroaki Tateno investigation, methodology, writing-review & editing; Jitske van Ede data curation, formal analysis, methodology; Mario Pronk methodology, writing-review & editing; Mark C.M. Van Loosdrecht funding acquisition, supervision; Yuemei Lin investigation, supervision, writing-original draft, writing-review & editing. The authors declare no competing financial interest.

Acknowledgments

This research was financially supported by the SIAM Gravitation Grant 024.002.002 from The Netherlands Organization for Scientific Research, TKI Chemie 2017 (cofunded by Royal Haskoning DHV) from the Dutch Ministry of Economic Affairs and Climate Policy, and the Novo Nordisk Foundation (REThiNk, grant NNF22OC0071498). The glycosyl composition analysis performed at the CCRC, the University of Georgia, was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Chemical Sciences, Geosciences and Biosciences Division, under award #DE-SC0015662.
Methods

Genome Mining of Streptomyces sp. H-KF8

The BGCs of Streptomyces sp. H-KF8 were identified through the online platform antiSMASH (version 6.0), 25 using the genome. 21 Genetic determinants and peptide predictions were manually validated.

Peptide Synthesis

All peptides were synthesized by automated Fmoc-based SPPS on an INTAVIS MultiPep synthesizer. Peptides were synthesized on Rink amide resin with a loading capacity of 0.54 mmol/g. All amino acid derivatives were prepared as stock solutions at a concentration of 0.5 M. All derivatives were dissolved in DMF and mixed vigorously in a vortex until complete solubilization. Hexafluorophosphate benzotriazole tetramethyl uronium (HBTU) was used as the coupling reagent and prepared as a 0.5 M stock solution (5 equivalents relative to loading capacity) in DMF, and an NMM stock solution in DMF (two-fold excess over amino acids and coupling reagent, 10 equivalents relative to loading capacity) was used as the base. Fmoc deprotection was performed with 20% piperidine in DMF. Each amino acid was coupled twice for 20 min each, and Fmoc deprotection was likewise performed twice for 10 min each. NMP was used as a cosolvent during coupling. After every coupling, a DMF wash was performed. Peptide cleavage and deprotection were accomplished in a mixture of 92.5% TFA, 5% water, and 2.5% TIPS.

Synthesis of Cyclic Peptides

All cyclic peptides were synthesized by on-resin cyclization. The peptides were first synthesized as linear peptides and, after completion, subjected to an on-resin cyclization procedure directly after the last coupled amino acid. The procedure started with resin washing with N-methyl-2-pyrrolidone (NMP), followed by anhydride coupling to the N-terminal amino group. For the latter, a mixture of glutaric acid anhydride:DMAP:DIEA (10:1:10) in NMP (0.2 mL per eq.) was incubated for 1 h at 75 °C. The resin was then washed intensively three times with NMP, followed by three washes with DCM. The monomethoxytrityl (Mmt) group was then selectively cleaved from the lysine side chain with a mixture of 1% TFA, 5% TIPS, and 94% DCM (2 × 5 min, discarding the cleavage solution each time). The resin was finally washed intensively five times with DCM. The head-to-tail amide bond cyclization was performed by using different coupling reagents, C1 (5 eq. PyBOP and 10 eq. DIEA), C2 (5 eq. HATU and 10 eq. NMM), and C3 (1.5 eq. PyBOP and 3.3 eq. TMP). The coupling was performed twice for 2 h at 75 °C. After cyclization, the resin was washed thoroughly with DMF and DCM and dried before cleavage.

High-Performance Liquid Chromatography (RP-HPLC)

All crude and purified peptides were analyzed by analytical RP-HPLC on a Waters e2695 Alliance system (Waters, Milford, MA, USA) employing a Waters 2998 photodiode array (PDA) detector and an ISAspher Xela 100-1.7 C18 column (50 × 2.1 mm). HPLC eluent A was water (0.1% trifluoroacetic acid (TFA)), and eluent B was acetonitrile (0.1% TFA) (detection at 214 nm). Preparative-scale purification of the peptides was achieved employing a Waters 1525 binary gradient pump and a Waters 2998 PDA detector or a customized Waters 600 module equipped with a Waters 996 PDA detector (Waters). HPLC eluent A was water (0.1% TFA), and eluent B was acetonitrile (0.1% TFA).

Mass Spectrometry

The bioactive fraction obtained from crude extracts of Streptomyces sp. H-KF8 was identified by testing against Staphylococcus aureus ATCC 29740T, Staphylococcus epidermidis ATCC 35984T, Escherichia coli ATCC 8739T, Listeria monocytogenes ATCC 19114T, Pseudomonas aeruginosa ATCC 27853T, Klebsiella pneumoniae ATCC 13883T, Enterococcus faecalis ATCC 19433T, Micrococcus luteus ATCC 9341T, and Bacillus subtilis ATCC 1668T, and bioactive fractions were subjected to ESI-FT ICR MS analysis. For initial prediction of the consensus sequence from Streptomyces sp. H-KF8, MALDI-TOF MS/MS analysis was performed on a Bruker Ultraflex Extreme spectrometer; samples were prepared using α-cyano-4-hydroxycinnamic acid and measured in positive mode. The parent peak at m/z 1448.752 was selected for MS/MS analysis (Figures S1 and S2). The molecular weights of the purified compounds were confirmed by ESI mass spectrometry on a Waters SYNAPT G2-Si HD-MS spectrometer equipped with a Waters Acquity UPLC system (Waters). Leu-enkephalin was used as a reference compound for high-resolution measurements.

Amino Acid Analysis

To determine the concentration (calculated as active peptide content according to a given sample mass), amino acid analysis was performed as previously reported 42 (Table S3 and Figure S11).

NMR

Solution NMR experiments were performed at 280 K on a Bruker Avance III HD 900 MHz spectrometer. The peptide sample was dissolved in 90% H2O/10% D2O using the freeze-dried solid compound. Data were acquired and processed with Topspin 4.1.3 (Bruker, Rheinstetten, Germany) and analyzed with CCPN. 43 The proton resonance assignment was performed using a combination of 2D [1H,1H]-TOCSY (80 ms spinlock time) and [1H,1H]-NOESY and/or [1H,1H]-ROESY experiments. Distance constraints were extracted from [1H,1H]-NOESY and [1H,1H]-ROESY spectra acquired with 200–300 ms mixing time. The upper-limit distance constraints were calibrated according to their intensity in the NOESY/ROESY spectrum. Torsion angle constraints were obtained from proton chemical shift analysis using DANGLE 44 and adapted accordingly to d- and l-amino acids. 45 Structure calculations were performed with YASARA Structure (Table S5). 46−48 Structures were refined in water at pH 4.

CD Spectroscopy

Circular dichroism (CD) spectra were recorded on an Applied Photophysics Chirascan instrument in a cuvette with a path length of 1 cm, between 190 and 280 nm, with a bandwidth of 1.0 nm, a step size of 1 nm, and a response time of 7.6 s per point. CD spectra were recorded in mdeg at a temperature of 23 °C. The background was measured on a blank sample containing the corresponding solvent and subtracted from the CD spectra of the samples. Peptides were dissolved at two different concentrations (10 μM and 50 μM) in water.

Minimum Microbicidal Concentration (MMC99)

The microbicidal activity of the synthetic peptides was assessed against S. aureus (ATCC 12600; American Type Culture Collection, Manassas, VA, USA), Escherichia coli (CCUG 31246; Culture Collection, University of Gothenburg, Sweden), and Candida albicans (ATCC 64549/CCUG 31028) using an MMC assay, as described previously by Haversen et al. (specific details in the Supporting Information, pages 10 and 11). 28 The assay was performed in BHI diluted 1/100 (BHI100). Two-fold dilution series were performed for all peptides, starting at 100 μg/mL down to 1.6 μg/mL. Fusidic acid, polymyxin B (PolB), and clotrimazole were used for comparison.

Strains and Growth Conditions for Mechanism of Action Studies

E. coli CCUG31246 and MC4100 (F–-(araD139) Δ(argF-lac)169 λ–e14– flhD5301 Δ(fruK-yeiR)725(fruA25) relA1 rpsL150 (Strr) rbsR22 Δ(fimB-fimE)632(:IS1) deoC1, spoT1) 49 carrying pABCON2-fhuAΔC/Δ4L 41,50 were grown aerobically at 37 °C in 1:10 diluted BHI.

Minimal Inhibitory Concentration

MICs were determined under the same growth conditions used for the mode-of-action analysis, following the guidelines issued by the Clinical and Laboratory Standards Institute (CLSI) with slight modifications. Serial two-fold dilutions of the peptides were prepared in 1:10 BHI in a sterile 96-well plate and subsequently inoculated with 5 × 10^5 CFU/mL of E. coli CCUG31246. MIC plates were incubated at 37 °C for 16 h under steady agitation. The following MICs were obtained: C3: 512 μg/mL, L3: 64 μg/mL, and L3-K: 128 μg/mL. These concentrations were used for all mechanistic experiments. Polymyxin B was used as a positive control in all assays at a final concentration of 10 μg/mL.

Peptide Stability in Serum

Peptide serum stability was assessed according to D'Aloisio et al. 30 and Chen et al. 51 with some minor modifications. Serum stability was investigated using human serum (Sigma-Aldrich) from human male AB plasma, USA origin, sterile-filtered. 250 μL of serum was temperature-equilibrated at 37 °C, and 100 μL of aqueous peptide solution (L2: 1 mM; L3: 0.9 mM; L3-K: 0.7 mM; and C3: 0.5 mM) was added. RP-HPLC analyses were performed at time points of 0, 0.5, 1, 4, and 24 h. For each RP-HPLC sample, 30 μL of peptide solution was taken, 7 μL of Fmoc-Gly solution (4 mM) was added, and 2 μL of the sample was injected into the HPLC. Fmoc-Gly was used as an internal standard (RP-HPLC chromatograms are shown in Figure S13).

Hemolysis

The hemolysis assay was performed according to Myhrman et al. 52 with some minor modifications. In short, the hemolytic activities of the peptides L2, L3, L3-K, and C3 were determined using fresh human erythrocytes from blood donors. The erythrocytes were separated from EDTA-supplemented blood by centrifugation at 1000g for 5 min, washed three times with PBS (pH 7.4), and resuspended in PBS to a final red blood cell (RBC) concentration of 2% (v/v). The peptide was serially diluted in two-fold steps in PBS in 80 μL volumes (in triplicate) in a round-bottom 96-well plate (Sarstedt, Nümbrecht, Germany, 82.1582001). An equal volume of the RBC suspension was added to the wells, and the plate was incubated for 1 h at 37 °C. After the incubation, 100 μL of the supernatants was carefully removed and transferred to a new microplate. Since L3, L3-K, and C3 turned yellow in PBS, which increased the absorbance, the pH was adjusted to 5 in all transferred supernatants. At this pH, the yellow color of the peptides disappeared, while the color of the hemoglobin in the positive control was unaffected at 490 nm. The release of hemoglobin was analyzed by measuring the absorbance of the supernatants at 490 nm (minus 650 nm) (Tecan). The negative control consisted of PBS instead of peptide, and the positive control consisted of 0.1% Triton X-100 (total hemolysis). The percentage of hemolysis was calculated for all transferred samples using the abs 490–650 of the Triton X-100-containing sample as 100% hemolysis.

Cytotoxicity

Cytotoxicity against human embryonic kidney (HEK) and hepatoblastoma (HepG2) cell lines was assessed. Cells were seeded at a density of 10 000 cells per well. HEK and HepG2 cells were exposed to four different concentrations of the peptides for 24 h. Treatment with Triton X-100 at 1% v/v was used as a positive control. Following this exposure period, the cells were treated with resazurin at a final concentration of 0.015 mg/mL for 3 h, and the metabolic activity was determined in a resazurin-based assay. The fluorescence intensities were quantified in a Hidex Sense microplate reader using an excitation wavelength of 544 nm and an emission wavelength of 590 nm.

Bacterial Cytological Profiling

Bacterial cytological profiling was performed according to Wenzel et al. 34 In short, E. coli CCUG31246 was grown to an OD600 of 0.3 before antibiotic addition. Samples were taken after 5 and 55 min of antibiotic treatment and subsequently stained with 0.5 μg/mL FM4-64 (Invitrogen) and 1 μg/mL DAPI (Invitrogen) for an additional 5 min. Stained samples were spotted on 1.2% agarose films, sealed with a gene frame, and immediately imaged using a Nikon Eclipse Ti2 inverted fluorescence live-cell imaging system equipped with a CFI Plan Apochromat DM Lambda 100X Oil objective (N.A. = 1.45 and W.D. = 0.13 mm), a Photometrics PRIME BSI camera, a Lumencor Sola SE II FISH 365 light source, and an Okolab temperature incubation chamber. Images were obtained using the NIS Elements AR software version 5.21.03 and were processed and analyzed with ImageJ. 53 Quantification of microscopy images was performed using the ImageJ plugins ObjectJ 54 and MicrobeJ. 55 Cell length was analyzed based on phase contrast images in ObjectJ using default parameters. 54 DNA compaction was analyzed using MicrobeJ 55 based on DAPI and phase contrast images. The parameters for cell and DNA recognition were set to default values. The area and width were adjusted to the minimal measured cell length of each sample to ensure detection of all bacterial cells while reducing false-positive detection of debris. Fluorescence intensity parameters remained at default settings. The Z-score was adjusted manually to ensure proper fitting of the DNA detection. DNA compaction values were derived as the quotient of the cell area divided by the DNA area.

Propidium Iodide Staining

Pore formation was investigated with the fluorescent dye propidium iodide as described previously 56 with minor modifications. E. coli CCUG31246 was grown to an OD600 of 0.3 and subsequently treated with the different peptides for 10 and 60 min. Samples were stained with 1 μg/mL propidium iodide for 15 min (added 5 min before adding antibiotics for the 10 min time point and after 45 min of antibiotic treatment for the 60 min time point), spotted on 1.2% agarose films, and sealed with a gene frame. Microscopy was performed as described above. The fluorescence intensity of the different samples was analyzed with MicrobeJ. For detection of bacterial cells from phase contrast images, parameters were set to an area of 1.5-max, length of 1-max, width 0.5–2.5, curvature 0–1.5, and an angularity of 0–0.5. Fluorescence intensity parameters were set to an area of 1.5-max, length of 1-max, and width 0.5–2.5. All other parameters remained at default settings.

DiSC(3)5 Microscopy

The membrane potential of E. coli CCUG31246 and E. coli MC4100 carrying pABCON2-fhuAΔC/Δ4L was measured by DiSC(3)5 microscopy according to ref 40 with minor modifications. Cells were grown in 1:10 BHI containing 50 μg/mL bovine serum albumin to an OD600 of 0.3 prior to antibiotic treatment with the respective peptides for 10 and 60 min.
Samples were stained with 0.5 μM DiSC(3)5 for 15 min (added 5 min before adding antibiotics for the 10 min time point and after 45 min of antibiotic treatment for the 60 min time point). Stained samples were spotted on 1.2% agarose, sealed with a gene frame, and imaged immediately. Microscopy was performed as described above, and fluorescence intensity was analyzed with the same parameters used for propidium iodide detection.
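The broth-dilution readouts described above follow a simple scheme: a two-fold dilution series is prepared, the MMC99 is taken as the lowest concentration at which at least 99% of the inoculum is killed (from CFU counts), and the MIC is taken as the lowest concentration without visible growth. The Python sketch below illustrates that bookkeeping; the concentration range matches the description above, but the CFU counts, growth flags, and function names are hypothetical and are not data from this study.

```python
# Minimal sketch of the dilution-series bookkeeping behind MMC99 and MIC values.
# CFU counts and growth flags are hypothetical, not data from this study.

def twofold_series(start=100.0, n=7):
    """Two-fold dilution series in ug/mL: 100, 50, ..., ~1.6 for n = 7."""
    return [round(start / 2**i, 2) for i in range(n)]


def mmc99(cfu_per_ml_by_conc, inoculum_cfu_per_ml):
    """Lowest concentration killing >= 99% of the inoculum (None if never reached)."""
    survivors_allowed = 0.01 * inoculum_cfu_per_ml
    candidates = [c for c, cfu in cfu_per_ml_by_conc.items() if cfu <= survivors_allowed]
    return min(candidates) if candidates else None


def mic(growth_by_conc):
    """Lowest concentration with no visible growth (None if growth at all tested)."""
    candidates = [c for c, grew in growth_by_conc.items() if not grew]
    return min(candidates) if candidates else None


if __name__ == "__main__":
    concentrations = twofold_series()  # 100, 50, 25, ..., ~1.6 ug/mL
    inoculum = 5e5                     # CFU/mL, as used for the MIC assay
    # Hypothetical plate counts after treatment (CFU/mL) at each concentration.
    counts = {100.0: 0, 50.0: 0, 25.0: 2e3, 12.5: 8e4, 6.25: 3e5, 3.12: 5e5, 1.56: 5e5}
    # Growth flags derived here from the hypothetical counts, for illustration only.
    growth = {c: cfu > 0.01 * inoculum for c, cfu in counts.items()}
    print("series:", concentrations)
    print("MMC99 :", mmc99(counts, inoculum), "ug/mL")
    print("MIC   :", mic(growth), "ug/mL")
```

In practice, the growth flag comes from reading the plate (turbidity), not from CFU counts, which is one reason the MIC and MMC99 of the same peptide can differ; the sketch only shows how each value is read off the dilution series.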
Results and Discussion

Genome Mining of Streptomyces sp. H-KF8

An updated genome mining analysis was performed using antiSMASH v6.0 22 and confirmed the low similarity of both NRPS biosynthetic gene clusters mentioned above. 17 Among them, the NRPS BGC #1.8 presented novel genetic features and was therefore selected for further analysis (Figure 2 and Table S1). The NRPS #1.8 (Figure 2) is composed of 31 genes arranged in a BGC of 77 237 bp total length. It harbors two nrps biosynthetic genes, in which 10 adenylation domains (A-domains) were detected, nine representing complete modules and one a stand-alone domain. A thioesterase domain (TE-domain) was found contiguous to the nrps genes, suggesting a final release and cyclization step for the peptide chain. Genomic prediction suggested that the putative product of this pathway could be a decapeptide with six d-amino acids, owing to the presence of six epimerization domains (E-domains) within the nrps genes of this BGC (Figure 2). Moreover, the analysis indicates that this BGC encodes more than one resistance protein, which could indicate that the peptide formed has more than one possible mechanism of inhibition. To confirm the bioinformatic prediction and identify the peptide's primary structure, LC-MS analysis of the bioactive extract of Streptomyces sp. H-KF8 was conducted (Figure S1). MALDI-TOF MS/MS confirmed the presence of 8 out of the 10 predicted amino acids, leading to the prediction of the following consensus sequence (Figure S2):

X1–d-Ala–d-Val–d-Ala–Trp–d-Orn–X7–d-Orn–Val–d-Tyr

This consensus sequence presents variability at some positions of the assembly line. For instance, the presence of a stand-alone A-domain could indicate that X1 has a non-amino-acid nature, which is consistent with its not being detected by MALDI-TOF MS/MS. Additionally, antiSMASH was not able to predict the two noncanonical amino acids that are incorporated in the assembly line (i.e., d-Orn), although they were detected by MALDI-TOF MS/MS and successfully predicted by the complementary bioinformatic tool PRISM. 23 Moreover, position X10 could be a d-Tyr or a tyrosine modified with a nitro group (NO2-Tyr), due to the presence of tailoring enzymes within the BGC responsible for this modification. Finally, the genetic predictions related to the NRPS #1.8 and the functional evidence of the formation of a peptide in crude extracts of strain H-KF8 suggest that the predicted peptide core could be further decorated with sugar or amino-sugar moieties, indicating that Streptomyces sp. H-KF8 is able to produce a natural product of complex nature, putatively a cyclic glycodecapeptide. Further chemical diversity based on the presence of noncanonical amino acids and epimerization domains is conceivable. All of the above-mentioned hypotheses remain to be confirmed; however, we used this consensus sequence as a starting point to synthesize naturally inspired bioactive peptides that could be proposed as novel therapeutic agents.

Peptide Design and Synthesis

Based on the predicted consensus sequence, five different linear peptides were generated (L1, L2, L2-K, L3, and L3-K, Figure 3a). The presence of the TE-domain suggested the formation of cyclic peptides. Therefore, three cyclic versions were synthesized, representing the cyclic forms of peptides L1, L2, and L3 (C1–C3, Figure 3b). Since no certain prediction could be made for the C- and N-terminal amino acids, the first peptide contained only eight amino acids (L1, Figure 3a) with respect to the predicted core sequence. For the second peptide, d-Asp was exchanged for ornithine (Orn) (L2, Figure 3a), owing to the possible variation in the PRISM analysis. The third peptide was composed of 10 amino acids, in which the first amino acid is Trp and the tenth amino acid is Tyr-NO2 (L3, Figure 3a). For the on-resin head-to-tail cyclization via the side chain (Figure 3b), a C-terminal Lys was introduced, while glutaric anhydride was coupled to the N-terminus, which results in a carboxy group. 24 For comparison in the context of the cyclic peptides, two linear peptides, L2-K and L3-K, which carry no C-terminal Lys, were synthesized in order to study the influence of a positively charged amino acid at the C-terminus. All designed peptides were synthesized following the standard Fmoc-based solid-phase peptide synthesis (SPPS) protocol, purified, and characterized by RP-HPLC, LC-MS, and amino acid analysis (Figures S3–S11, Tables S2 and S3). Analytical data and physicochemical properties of the synthesized peptides are summarized in Tables 1 and S2. Amino acid analysis revealed that the peptide content was around 94%, which was taken into account for concentration calculations.

Antimicrobial Activity

The bioactivity of chemically synthesized AMPs is usually determined by applying antimicrobial susceptibility testing (e.g., broth dilution testing) to determine the minimum inhibitory concentration (MIC), which is standardized for small molecules. 25 However, in AMP discovery, this approach faces limitations, since many peptides by nature are not as stable as small organic molecules and the complex media composition suitable for bacterial growth in the lab may (i) affect a peptide's bioactivity and (ii) not represent the actual infection environment. 26,27 As a consequence, many potent AMPs can be mistakenly discarded, and compounds with a novel mode of action and novel targets will be overlooked, which is detrimental in the face of the current antimicrobial resistance crisis. Therefore, the research community is adapting the conditions used to determine the bioactivity of AMPs, which can make it difficult to compare data in the literature from one discovery to another and from small molecules to peptides. 26 One alternative and reliable method is to determine an AMP's minimal microbicidal concentration, i.e., the lowest concentration killing 99% of the inoculum (MMC99), 28 which is presented in this study alongside the MIC values. The antimicrobial activities of the peptides were investigated against Gram-positive S. aureus, Gram-negative E. coli, and the yeast Candida albicans. L2, L3, L3-K, and C3 showed antimicrobial activity against the tested microorganisms (Table 2). The values obtained for the new peptides were compared with those of well-known antimicrobials, such as fusidic acid, polymyxin B, and clotrimazole. The MMC99 study (Table 2) shows that while L2 was active against C. albicans at an MMC99 of 25 μg/mL and much less active against S. aureus, L3 and L3-K were found to be active against all three species. L3 was the most active peptide against S. aureus, E. coli, and C. albicans, showing MMC99 values ranging from 6.3 to 25 μg/mL. L3-K, lacking the C-terminal lysine that was introduced for the cyclization of L3, still showed activity, although at higher concentrations of 12.5–50 μg/mL.
Interestingly, upon cyclization (C3), the peptide lost activity again, remaining active only against E. coli at concentrations ≥100 μg/mL. It can be concluded that the length and charge of the linear peptides (Table 1) have only a minor impact on antimicrobial activity, while structural changes caused by the sequence variations and cyclization are most likely the reasons behind the altered antibacterial activity. Taken together, the peptides L3 and L3-K show the most promising minimum microbicidal concentrations for all three strains tested, compared with the well-known commercially available antimicrobials. Fusidic acid, whose MMC99 value for S. aureus decreases over time from 25 to 1.6 μg/mL, shows different kinetics from the new compounds L3 and L3-K, whose MMC99 values stay constant over 24 h or increase slightly, pointing to differences in the mechanism of action. Polymyxin B outperforms the new peptides in terms of MMC99 values against E. coli, although the low MMC99 values of L3 and L3-K and the change of the MMC99 values over time likewise suggest differences in the mechanism of action. In contrast, clotrimazole shows considerably higher MMC99 values compared with L3, L3-K, and L2, indicating that the new peptides possess a different mode of action against C. albicans. As expected, the MIC values for all organisms studied are noticeably higher than the MMC99 values because of the media used. The lowest MIC values were observed for peptide L3: 64 μg/mL against E. coli, 70 μg/mL against C. albicans, and 248 μg/mL against S. aureus. The values rise when the Lys is absent from the peptide sequence (Table 2), mirroring the trend observed for the MMC99 values. The MIC values were used for the further experiments, such as the mode-of-action, hemolysis, and cytotoxicity studies of the peptides.

Salt Resistance of Peptides

Salt sensitivity is one of the well-known limiting factors that influence the microbicidal activity of AMPs and limit their initial application as novel antibiotics, 29 a problem that can be circumvented by using a peptidomimetic approach. 5 Here, the newly identified natural peptides were studied to determine their initial salt stability. Biological salt stability was tested for the two most potent peptides, L2 and L3, by adding either 85 or 150 mM NaCl to the growth medium of the MMC99 assay (Table S6). While L2 completely lost its activity in the presence of both salt concentrations, L3 retained moderate activity (MMC99 50–200 μg/mL) at 85 mM NaCl against C. albicans, but not against E. coli or S. aureus. These results show that the peptides are not salt resistant, which seems surprising because these peptides were predicted from the seawater organism Streptomyces sp. H-KF8. However, this might be a consequence of simplifying the predicted structures to the peptide core or of the uncertainty of the structure predictions based on the genome analysis.

Peptide Stability in Serum

Peptide stability in serum is another limiting factor for AMPs as novel antibiotics. 30 The stability of the synthesized peptides (L2, L3, L3-K, and C3) was investigated in human serum using HPLC after 0, 0.5, 1, 4, and 24 h. The results show that the peptides are stable after 24 h of incubation in human serum at 37 °C, as indicated by the consistent signal of the individual peptides in their chromatograms (Figure S13). After 24 h, the chromatograms for L2, L3, and L3-K show slight differences in peak shape. L2 develops a small shoulder with an overall volume percentage of 0.4%. The chromatograms of L3 and L3-K show small additional peaks with a total volume of 0.6% (L3) and 1.3% (L3-K). The chromatogram for C3 does not show any additional signal after 24 h.

Hemolysis

Since antimicrobial peptides are known to disturb cell membrane integrity, their hemolytic activity on human erythrocytes has been used as an indication of their toxicity. The hemolytic activity of the synthesized peptides L2, L3, L3-K, and C3 was tested on fresh human erythrocytes from blood donors after peptide exposure (Figure S14). The hemolysis assay for the four peptides was performed in PBS buffer at pH 5, since L3, L3-K, and C3 developed a clear yellow color at pH 7.4 in PBS buffer, owing to internal hydrogen bond formation related to Y-NO2, which has a pKa value of 7.1. 31 The lower pH value removed the yellow color while leaving the hemoglobin absorbance unaffected. Only L3 exceeded the background level, with a hemolytic activity of 4% at the highest concentration (Figure S14). The absent or very low hemolytic activity is to be expected, since all of the peptides were sensitive to physiological salt concentrations.

Cytotoxicity

To confirm the peptides' selectivity toward bacterial cells, cytotoxicity assays of the synthesized peptides L2, L3, L3-K, and C3 against human embryonic kidney (HEK) and hepatoblastoma (HepG2) cell lines were performed (Figure 4). Cell viability was assessed using resazurin 24 h post peptide exposure. The line at 70% cell viability marks the threshold for cytotoxic potential compared with the negative control. Peptides L2, L3, and L3-K showed cell viability well above the threshold, indicating no cytotoxicity at any tested concentration. Only C3 at the highest tested concentration appears to have cytotoxic properties, with cell viability similar to that of the positive control.

Secondary Structure Elucidation

To gain insight into the peptides' possible mechanisms of action, the secondary structure of the linear peptides L1, L2, and L3 (Figure S12) was analyzed using CD spectroscopy, whereas NMR structure determination was conducted for L1, L2, L3, C2, and C3 (Figure 5). The CD spectra of the three peptides do not exactly resemble the typical spectroscopic features of β-sheet, α-helical, or turn-harboring peptides (Figure S12). Although the CD spectra of short peptides with unnatural amino acids are difficult to interpret, the band shape and the maxima around 225 nm for peptides L2 and L3 seem to indicate a left-handed α-helix, as the spectra appear to be mirror images of those of α-helix-containing peptides. 32 In any case, the CD spectroscopic similarity between L2 and L3 clearly indicates some structural similarity (Figure S12). In contrast, the CD spectrum of L1 did not indicate a defined secondary structure. Therefore, NMR spectroscopy was pursued for a more detailed structural analysis of the peptides. The linear peptides L1, L2, and L3 possess a half-helix-turn-like core, while the C- and N-termini remain unstructured and flexible (Figure 5a), as indicated by the ensemble backbone (bb) root-mean-square deviation (RMSD) ranging between 0.8 and 1.5 Å. Moreover, all peptides appear to be divided into a hydrophobic N-terminus and a more polar part at the C-terminus (Figure 5b).
Interestingly, peptide C2, the cyclic analog of peptide L2, rigidified significantly upon cyclization, as indicated by a decrease of the bb RMSD from 1.5 to 0.6 Å, and shows a well-defined structure (Figure 5a). In contrast, peptide C3, the cyclic counterpart of L3, became more flexible, as reflected by the increase in bb RMSD from 0.8 to 1.9 Å (Figure 5a). In parallel to the NOE-based structure analysis of the peptides, the temperature dependence of the NH chemical shifts was investigated to derive the temperature coefficients of the backbone NH protons and thereby identify hydrogen bonds (Table S4). Two internal hydrogen bonds were found for peptide L1 (Asp 5 and Thr 6) and one hydrogen bond for C2 (Thr 6). With respect to the calculated NMR structure of peptide L1, Asp 5 most likely forms a hydrogen bond with its side chain, while Thr 6 forms a hydrogen bond with the amide oxygen of Ala 3. In the case of peptide C2, the hydrogen bond acceptor for the NH proton of Thr 6 is most likely the oxygen atom of the N-terminal amide, which is thus partly responsible for its low flexibility. Although peptides L1 and L2 are structurally similar to L3 (Figure 5c), only L3 shows potent antimicrobial activity. Furthermore, the cyclization of L2 and L3 resulted in a changed globular shape of the molecule (Figure 5c–e), which might be the main reason for the loss of activity of C2 and C3.

Bacterial Cytological Profiling

To gain insight into the peptides' antimicrobial mechanisms, bacterial cytological profiling was performed. This live-cell imaging method makes use of different fluorescent dyes and protein fusions together with phase contrast microscopy to assess the phenotype of bacterial cells after antibiotic treatment. 33 Single-cell analysis then gives insight into the extent and population heterogeneity of the observed phenotypic effects. In this study, we used the DNA dye DAPI and the membrane dye FM4-64 and analyzed the effects of the compounds on cell length, nucleoid compaction, and membrane morphology (Figures 6 and S15). To this end, E. coli CCUG31246 (a uropathogenic clinical isolate) was chosen as a representative model, and the peptides L3, L3-K, and C3, which showed activity against E. coli in the MMC assay, were tested. In preparation for the mechanistic studies, MICs were determined. For the further experiments, 1× MIC was used for each peptide (64 μg/mL L3, 128 μg/mL L3-K, and 512 μg/mL C3). The lipopeptide polymyxin B, which permeabilizes both the inner and the outer membrane of Gram-negative bacteria, was used as a positive control (10 μg/mL). Cells were microscopically examined after 10 and 60 min of peptide treatment. No marked effects were observed on cell length; only L3-K showed very slightly shorter cells on average after 60 min of treatment (Figure S16). In contrast, the DAPI stain indicated that DNA compaction was affected by all peptides (Figure 6a). Quantification of nucleoid compaction revealed that all peptides caused clear nucleoid relaxation after only 10 min of treatment (Figure 6b). This effect was even more apparent after 60 min for both L3 and L3-K. This observation indicates that C3, L3, and L3-K affect DNA packing in a manner similar to polymyxin B. It should be noted that nucleoid relaxation is not a common phenotype caused by AMPs; e.g., tyrocidines and gramicidin S have been shown to have the opposite effect on bacteria. 34 Interestingly, C3 displayed considerable population heterogeneity after 60 min of treatment, showing individual cells with normal, relaxed, and condensed nucleoids. This could be indicative of cells in different stages of inhibition or of different reactions of individual cells within an inherently heterogeneous bacterial population. Clear effects were also observed on the bacterial membrane morphology in the FM4-64 stain (Figure 6a). All three peptides showed two subpopulations with distinct phenotypes: cells with strongly fluorescent membrane foci (white arrows, Figure 6a) and cells in which membrane staining was reduced or not visible at all (yellow arrow, Figure 6a). Most membrane dyes, including FM4-64, prefer more fluid membrane regions and accumulate in those areas when phase separation occurs, appearing as intensely fluorescent foci. Conversely, membrane dyes are often depleted from rigid membrane regions or show less intense fluorescence in rigid membrane environments. 35,36 Thus, our results point to membrane phase separation in cells with bright foci and possibly increased membrane rigidity in cells with a very weak membrane stain. When quantifying these phenotypes, a clear trend toward a higher proportion of cells with reduced membrane staining at 60 min compared with 10 min became apparent (Figure 6c), suggesting a two-stage effect in which cells first undergo a transient membrane phase separation, possibly followed by overall membrane rigidification. However, it must be noted that FM4-64 binds to both the inner and the outer membrane of Gram-negative bacteria, depending on inner membrane accessibility, 37,38 and thus does not allow a reliable distinction between inner and outer membrane effects. The positive control polymyxin B displayed the same distinct phenotypes, yet over 90% of cells showed the unstained phenotype already after 10 min, suggesting that it acts much faster than L3 and L3-K. In line with the DAPI results, C3 showed considerable population heterogeneity as well as sample-to-sample variation in the FM4-64 stain, which is reflected by the large error bars in Figure 6c. Owing to the overall similarity of the peptides' cytological profiles to that of polymyxin B, we further tested their ability to form pores in the cell membrane. To this end, we used the fluorescence probe propidium iodide, which cannot cross intact membranes but can enter cells through pores of sufficient size. 39 Pore-forming peptides, such as polymyxin B, lead to near-instantaneous uptake of the dye throughout the bacterial population. This effect was indeed observed here with polymyxin B, but only a small subpopulation of cells treated with C3, L3, and L3-K showed increased fluorescence (Figure 7a), suggesting that the peptides do not act by pore formation. While the proportion of fluorescent cells increased after 60 min, a substantial fraction of cells (31% for L3, 35% for L3-K, 64% for C3) remained unstained, indicating that the individual red-stained cells are most likely perforated because they are undergoing cell lysis as a consequence of peptide-induced cell death. Propidium iodide is a large organic molecule that is not suitable for detecting smaller ion-conducting pores or channels induced by antimicrobials. To assess whether the peptides may form smaller membrane pores sufficient for the passage of ions, we tested their effects on the membrane potential using the fluorescence probe DiSC(3)5. 40 This dye accumulates in the cell membrane in a membrane potential-dependent manner and dissociates when the membrane potential is dissipated. If a pore, large or small, is formed in the cell membrane, a clear and immediate reduction in the DiSC(3)5 fluorescence intensity is observed. This effect can be clearly seen with polymyxin B (Figure 7b). In contrast, C3 had no effect on the membrane potential after 10 min and caused partial depolarization after 60 min. L3 and L3-K displayed an increased fluorescence signal at 10 min and, after 60 min, a heterogeneous population of partially depolarized cells and cells with a higher fluorescence signal. This behavior may be indicative of outer membrane permeabilization, as DiSC(3)5 has only limited outer membrane permeability, and its uptake is increased when the outer membrane is permeabilized, resulting in a heterogeneous cell population with an overall higher fluorescence signal. This effect is, for example, observed after short treatment times with polymyxin B (data not shown). It is conceivable that L3 and L3-K similarly permeabilize the outer membrane, yet on a much slower time scale. To test this hypothesis, we employed an E. coli strain that overexpresses the outer membrane porin FhuA, making these cells more permeable to most fluorescence dyes, including DiSC(3)5. 41 Indeed, the consecutive increase and decrease of fluorescence intensity observed in the wild type was strongly reduced in the FhuA-overexpressing strain (Figure 7c). This observation suggests that L3 and L3-K first permeabilize the outer and then the inner membrane. Interestingly, in the FhuA-overexpressing strain, L3 completely depolarized most cells after 10 min, while the effect of L3-K only set in after 60 min and remained heterogeneous. This shows that these two peptides have, in principle, very different inner membrane permeabilization kinetics. These kinetics do not become apparent in the wild type, where the outer membrane is fully intact, yet they will be important to take into consideration when these qualified hit structures are modified for future drug development. C3 behaved similarly in both E. coli strains, showing stronger depolarization in the FhuA-overexpressing strain, which suggests that it is partially retained by the outer membrane. This effect, together with the absence of highly fluorescent cells in the wild-type samples, suggests that, in contrast to L3 and L3-K, this peptide does not notably permeabilize the outer membrane. Taken together, our data show that L3 and L3-K affect both the outer membrane and the inner membrane of Gram-negative bacteria. They do not form pores large enough for efficient uptake of the propidium iodide probe but allow the passage of small ions, resulting in dissipation of the membrane potential. They thereby act on a much slower time scale than polymyxin B and do not cause complete membrane depolarization. Since a pore would cause immediate and complete depolarization, we can conclude that L3 and L3-K instead slowly increase the passive permeability of the cell membrane. Together with our FM4-64 staining results, we can hypothesize that this may be due to phase boundary defects caused by membrane phase separation. C3 showed similar effects on membrane phase separation. It did not show any effect on the outer membrane and had only mild effects on the membrane potential.
These differences suggest that C3 probably acts similarly to L3 and L3-K but is specific for the inner membrane, while the other two peptides display a dual activity on both membranes of E. coli . All three peptides cause relaxation of the nucleoid, which is indicative of DNA packing defects. The same effects were observed for polymyxin B, suggesting that this could be a yet unknown consequence of their interaction with the inner membrane. However, this is not a general effect of membrane-targeting antimicrobial peptides, and an additional independent mechanism, possibly involving peptide translocation into the cytosol and interaction with DNA, cannot be excluded at this stage.
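The single-cell quantifications referenced throughout this section (nucleoid compaction as the quotient of cell area and DNA area, and the fraction of cells classified as stained or depolarized by a fluorescence cutoff) reduce to simple per-cell arithmetic once the image-analysis software has exported its measurements. The sketch below illustrates that arithmetic; the per-cell values, field names, and threshold are hypothetical and are not taken from the study's data.

```python
# Minimal sketch of the per-cell arithmetic behind the microscopy quantifications.
# The measurements and threshold are hypothetical, not data from this study.

from statistics import mean

# Per-cell measurements as they might be exported from an image-analysis tool:
# cell area, DNA (DAPI) area, and a fluorescence intensity (e.g., PI or DiSC(3)5).
cells = [
    {"cell_area": 2.1, "dna_area": 1.4, "intensity": 120.0},
    {"cell_area": 2.4, "dna_area": 2.0, "intensity": 950.0},
    {"cell_area": 1.9, "dna_area": 1.1, "intensity": 80.0},
    {"cell_area": 2.2, "dna_area": 1.9, "intensity": 40.0},
]

INTENSITY_THRESHOLD = 500.0  # arbitrary cutoff separating "stained" from "unstained"


def compaction(cell):
    """Nucleoid compaction as cell area divided by DNA area (higher = more compact)."""
    return cell["cell_area"] / cell["dna_area"]


def stained_fraction(cells, threshold=INTENSITY_THRESHOLD):
    """Fraction of cells whose fluorescence intensity exceeds the threshold."""
    return sum(1 for c in cells if c["intensity"] > threshold) / len(cells)


if __name__ == "__main__":
    print("mean compaction :", round(mean(compaction(c) for c in cells), 2))
    print("stained fraction:", stained_fraction(cells))
```

Comparing the mean compaction ratio of treated versus untreated samples would reproduce the "relaxed versus condensed nucleoid" readout, and applying the same threshold logic to PI or DiSC(3)5 intensities gives the population fractions reported in the figures; in practice, the threshold would be set from untreated control cells rather than chosen arbitrarily.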
Results and Discussion Genome Mining of Streptomyces sp. H-KF8 An updated genome mining analysis was performed using antiSMASH v6.0. 22 and confirmed the low similarity of both NRPS biosynthetic gene clusters mentioned above. 17 Among them, the NRPS BGC #1.8 presented novel genetic features, and therefore, was selected for further analysis ( Figure 2 and Table S1 ). The NRPS #1.8 ( Figure 2 ) is composed of 31 genes arranged in a BGC of 77 237 bp of total length. It harbors two nrps biosynthetic genes, where 10 adenylation domains (A-domain) were detected, nine representing complete modules and one stand-alone domain. A thioesterase domain (TE-domain) was found contiguous to the nrps genes, suggesting a final step of releasing and cyclization of the peptide chain. Genomic prediction suggested that the putative product of this pathway could be a decapeptide with six d -amino acids, due to the presence of six epimerization domains (E-domain) within nrps genes of this BGC ( Figure 2 ). Moreover, the analysis indicates that this BGC has more than one resistance protein. This could indicate that the peptide formed has more than one possible mechanism for inhibition. To confirm the bioinformatic prediction and identify the peptide’s primary structure, LC-MS analysis of the bioactive extract of Streptomyces sp. H-KF8 was conducted ( Figure S1 ). MALDI-TOF MS/MS confirmed the presence of 8 out of the 10 predicted amino acids, leading to the prediction of the following consensus sequence ( Figure S2 ). X 1 – d Ala – d Val – d Ala–Trp – d Orn – X 7 – d Orn–Val – d Tyr This consensus sequence presents variability in some positions of the assembly line. For instance, the presence of a stand-alone A-domain could indicate that X 1 may have a nonamino acidic nature, which is consistent with the absence of its detection by MALDI-TOF MS/MS. Additionally, antiSMASH was not able to predict the two noncanonical amino acids that are being incorporated in the assembly line (i.e., d -Orn), although they were detected by MALDI-TOF MS/MS and successfully predicted by the complementary bioinformatic tool PRISM. 23 Moreover, position X 10 could be a d -Tyr or a tyrosine modified with a nitro group (NO 2 -Tyr), due to the presence of tailoring enzymes within the BGC responsible for this modification. Finally, the genetic predictions related to the NRPS #1.8 and the functional evidence of the formation of a peptide in crude extracts of strain H-KF8 suggests that the predicted peptide core could be further decorated with sugar or amino-sugar moieties, indicating that Streptomyces sp. H-KF8 is able to produce a natural product of complex nature, putatively a cyclic glycodecapeptide. Further chemical diversity based on the presence of noncanonical amino acids and epimerization domains is conceivable. All of the above-mentioned hypotheses will remain to be confirmed; however, we used this consensus sequence as a starting point to synthesize naturally inspired bioactive peptides that could be proposed as novel therapeutic agents. Peptide Design and Synthesis Based on the predicted consensus sequence, five different linear peptides were generated ( L1, L2, L2-K, L3 , and L3-K , Figure 3 a). The presence of the TE-domain suggested the presence of cyclic peptides. Therefore, three cyclic versions were synthesized, representing the cyclic forms of peptides L1, L2 , and L3 ( C1–C3 , Figure. 3 b). 
Since, no certain prediction could be done for the C - and N -terminal amino acids, the first peptide contained only eight amino acids ( L1 , Figure 3 a) with respect to the predicted core sequence. For the second peptide d -Asp was exchanged with ornithine (Orn) ( L2 , Figure 3 a), due to the possible variation in the PRISM analysis. The third peptide was composed of 10 amino acids, where the first amino acid is Trp, and the tenth amino acid is Tyr-NO 2 ( L3 , Figure 3 a). For the on-resin head-to-tail cyclization via side chain ( Figure 3 b), a C -terminal Lys was introduced, while glutaric anhydride was coupled to the N -terminus, which results in a carboxy group. 24 For comparison regarding the cyclic peptides, two linear peptides L2-K and L3-K were synthesized, which carried no C-terminal Lys, aiming to study the influence of positively charged amino acid at the C-terminus. All designed peptides were synthesized following the standard Fmoc-based solid-phase peptide synthesis (SPPS) protocol, purified, and characterized by RP-HPLC, LC-MS, and amino acid analysis ( Figures S3–S11, Tables S2,S3 ). Analytical data and physicochemical properties of the synthesized peptides are summarized in Tables 1 and S2 . Amino acid analysis revealed that the peptide content was around 94%, which was considered for concentration calculations. Antimicrobial Activity Bioactivity of chemically synthesized AMPs is usually determined by applying antimicrobial susceptibility testing (e.g., broth dilution testing) to determine the minimum inhibitory concentration (MIC), which is standardized for small molecules. 25 However, in AMP discovery, this approach faces limitations since many peptides by nature are not as stable as small organic molecules and the complex media composition suitable for bacterial growth in the lab may (i) affect a peptide’s bioactivity and (ii) not represent the actual infection environment. 26 , 27 As a consequence, many potent AMPs can be mistakenly discarded and compounds with a novel mode of action and novel targets will be overlooked, which is detrimental in the face of the current antimicrobial resistance crisis. Therefore, the research community is adapting the conditions to determine the bioactivity of AMPs, which can make it difficult to compare data in the literature landscape from one discovery to another and from small molecules to peptides. 26 One of the alternative and reliable methods is to determine AMPs’ minimal microbicidal concentration, i.e., the lowest concentration killing 99% of the inoculum (MMC 99 ), 28 which is presented in this study alongside the MIC values. The antimicrobial activities of the peptides were investigated against Gram-positive S. aureus , Gram-negative E. coli , and the yeast Candida albicans . L2 , L3 , L3-K , and C3 showed antimicrobial activity against the tested microorganisms ( Table 2 ). The values obtained for new peptides were compared with well-known antimicrobials, such as fusidic acid, polymyxin B, and clotrimazole. The MMC 99 study ( Table 2 ) shows that while L2 was active against C. albicans at an MMC 99 of 25 μg/mL and much less active against S. aureus , L3 and L3-K were found to be active against all three species. L3 was the most active peptide against S. aureus , E. coli , and C. albicans , showing MMC 99 values ranging from 6.3 to 25 μg/mL. L3-K , lacking the C-terminal lysine, which was introduced for the cyclization of L3 , still showed activity, although at higher concentrations of 12.5–50 μg/mL. 
Interestingly, upon cyclization ( C3 ) the peptide lost again activity, now being only active against E. coli at concentrations ≥ 100 μg/mL. It can be concluded that the length and charge of the linear peptides ( Table 1 ) have only a minor impact on antimicrobial activity, while structural changes, caused by the sequence variations and cyclization, are most likely the reasons behind altered antibacterial activity. Taken together, the peptides L3 and L3-K show the most promising minimum microbicidal concentrations for all three strains tested, compared to the well-known commercially available antimicrobials. Fusidic acid, with its time decreasing MMC 99 value for S. aureus from 25 to 1.6 μg/mL, shows different kinetics over time compared to new compounds L3 and L3-K , where the MMC 99 value stays constant over 24 h or slightly increases, pointing to the differences in mechanism of action. Polymyxin B outperforms by its MMC 99 values in E. coli , though low MMC 99 values for peptides L3 and L3-K and its decreased values over time suggest here as well the differences in mechanism of action. In contrast, clotrimazole shows considerably higher MMC 99 values compared to L3 , L3-K , and L2 , indicating that new peptides possess different mode of action against C. albicans . The MIC values for all organisms studied are expectedly noticeably higher than the MMC 99 values due to the media used. The lowest MIC values were observed for L3 peptide at concentration of 64 μg/mL in E. coli , 70 μg/mL in C. albicans , and 248 μg/mL in S. aureus . The values are raising higher when the Lys is absent in the peptide sequence ( Table 2 ), which represents similar trend observed for the MMC 99 values. MIC values were considered for further experiments as mode of action studies, hemolysis and cytotoxicity of studied peptides. Salt Resistance of Peptides Salt sensitivity is one of the well-known limiting factors that influence microbicidal activity of AMPs and limits their initial application as novel antibiotics, 29 a problem that can be circumvented by using a peptidomimetic approach. 5 Here, the newly identified natural peptides were studied to determine their initial salt stability. Biological salt stability was tested for the two most potent peptides, L2 and L3 , by adding either 85 or 150 mM NaCl to the growth medium of the MMC 99 assay ( Table S6 ). While L2 completely lost its activity in the presence of both salt concentrations, L3 retained moderate activity (MMC 99 50–200 μg/mL) at 85 mM NaCl against C. albicans , but not against E. coli , or S. aureus . These results show that the peptides are not salt resistant, which seems to be surprising because these peptides were predicted from the seawater organism Streptomyces sp. HKF8. However, this might be a consequence of simplifying the predicted structures to the peptide core or the uncertainty of the structure predictions based on the genome analysis. Peptide Stability in Serum Peptide stability in serum is another limiting factor for AMPs as novel antibiotics. 30 The peptide stability of the synthesized peptides ( L2 , L3 , L3-K , and C3 ) was investigated in human serum using HPLC after 0, 0.5, 1, 4, and 24 h. The results show that the peptides are stable after 24 h of incubation in human serum at 37 °C, indicated by the consistent signal of the individual peptide in their chromatograms ( Figure S13 ). After 24 h, the chromatograms for L2 , L3 , and L3-K show slight peak shape differences. 
L2 develops a small shoulder with an overall volume percentage of 0.4%. The chromatograms of L3 and L3-K show small additional peaks with a total volume of 0.6% ( L3 ) and 1.3% ( L3-K ). The chromatogram for C3 does not show any additional signal after 24 h. Hemolysis Since antimicrobial peptides are known to disturb the cell membrane integrity, their hemolytic activity on human erythrocytes has been used as an indication of their toxicity. The hemolytic activity of the synthesized peptides L2 , L3 , L3-K , and C3 was tested against fresh human erythrocytes from blood donors post peptide exposure ( Figure S14 ). The hemolytic activity of the four peptides was performed in PBS buffer at pH 5, since L3 , L3-K , and C3 developed a clear yellow color at pH 7.4 in PBS buffer, due to an internal hydrogen bond formation related to Y-NO 2 with a p K a value of 7.1. 31 The lower pH value removed the yellow color while leaving the hemoglobin absorbance unaffected. As a result of the study, only L3 exceeded the background level with a hemolytic activity of 4% at the highest concentration ( Figure S14 ). Additionally, the absence or very low hemolytic activity is to be expected since all the peptides were sensitive to physiological salt concentration. Cytotoxicity To confirm the peptide selectivity toward bacteria cells, cytotoxicity assays of synthesized peptides L2 , L3 , L3-K , and C3 against human embryonic kidney (HEK) and hepatoblastoma (HepG2) were performed ( Figure 4 ). The cell viability was assessed using resazurin 24 h post peptide exposure. The line at 70% cell viability marks out the threshold for cytotoxic potential compared to the negative control. Peptides L2 , L3 , and L3-K showed cell viability significantly greater than the threshold, indicating no cytotoxicity at any tested concentration. Only C3 at the highest tested concentration seems to have cytotoxic properties with cell viability similar to the positive control. Secondary Structure Elucidation To gain insight into the peptide’s possible mechanisms of action, the secondary structure of linear peptides L1 , L2 , and L3 ( Figure S12 ) was analyzed using CD spectroscopy, whereas NMR structure determination was conducted for L1 , L2 , L3 , C2 , and C3 ( Figure 5 ). The CD spectra of the three peptides do not resemble exactly the typical spectroscopic features of β-sheet, α-helical, or turn-harboring peptides ( Figure S12 ). Despite the fact that the CD spectra of short peptides with unnatural amino acids are difficult to interpret, the shape of the absorbance and the absorbance maxima around 225 nm for peptides L2 and L3 seem to indicate a left-handed α-helix as it appears to be a mirror image of an α-helix containing peptides. 32 However, the CD spectroscopic similarity between L2 and L3 clearly indicates some structural similarity ( Figure S12 ). In contrast, the CD spectrum of L1 did not indicate a defined secondary structure. Henceforth, NMR spectroscopy was pursued for a more detailed structural analysis of the peptides. Linear peptides L1 , L2 , and L3 possess a half-helix turn-like core, while the C - and N -termini remain unstructured and flexible ( Figure 5 a), as indicated by the ensemble backbone (bb) root-mean-square deviation (RMSD) ranging between 0.8 and 1.5 Å. Moreover, all peptides appear to be divided into a hydrophobic N -terminus and a more polar part at the C -terminus ( Figure 5 b). 
Interestingly, peptide C2 , which is the cyclic analog of peptide L2 , rigidified significantly upon cyclization as indicated by a decrease of the bb RMSD from 1.5 to 0.6 Å and shows a well-defined structure ( Figure 5 a). In contrast, peptide C3 , which is the cyclic counterpart for L3 became more flexible, as reflected by the increased bb RMSD from 0.8 to 1.9 Å ( Figure 5 a). In parallel to the NOE-based structure analysis of the peptides, the temperature dependence of the NH chemical shifts was investigated to derive the temperature coefficients of the backbone NH protons to identify hydrogen bonds ( Table S4 ). Two internal hydrogen bonds were found for peptide L1 (Asp 5 and Thr 6) and one hydrogen bond for C2 (Thr 6). With respect to the calculated NMR structure for peptide L1 , Asp 5 most likely forms a hydrogen bond with its side chain, while Thr 6 forms a hydrogen bond with the amide oxygen of Ala 3. In the case of peptide C2 , the hydrogen bond acceptor for the NH proton of Thr 6 is most likely the oxygen atom of the N -terminal amide, thus, partly responsible for its low flexibility. Although, peptides L1 and L2 are structurally similar to L3 ( Figure 5 c), it is L3 , which shows potent antimicrobial activity. Furthermore, the cyclization of L2 and L3 resulted in a changed globular shape of the molecule ( Figure 5 c–e), which might be the main reason for the loss in activity of C2 and C3 . Bacterial Cytological Profiling To gain insight into the peptides’ antimicrobial mechanisms, bacterial cytological profiling was performed. This live-cell imaging method makes use of different fluorescent dyes and protein fusions together with phase contrast microscopy to assess the phenotype of bacterial cells after antibiotic treatment. 33 Single-cell analysis then gives insight into the extent and population heterogeneity of the observed phenotypic effects. In this study, we used the DNA dye DAPI and the membrane dye FM4-64 and analyzed the effects of the compounds on cell length, nucleoid compaction, and membrane morphology ( Figures 6 and S15 ). To this end, E. coli CCUG31246 (uropathogenic clinical isolate) was chosen as a representative model and the peptides L3 , L3-K , and C3 , which showed activity against E. coli in the MMC assay, were tested. In the preparation of mechanistic studies, MICs were determined. For further experiments, 1× MIC was used for each peptide (64 μg/mL L3 , 128 μg/mL L3-K , and 512 μg/mL C3 ). The lipopeptide polymyxin B, which permeabilizes both the inner and the outer membrane of Gram-negative bacteria, was used as a positive control (10 μg/mL). Cells were microscopically examined after 10 and 60 min of peptide treatment. No marked effects were observed on cell length. Only L3-K showed very slightly shorter cells on average after 60 min of treatment ( Figure S16 ). In contrast, the DAPI dye indicated that DNA compaction to be affected by all peptides ( Figure 6 a). Quantification of nucleoid compaction revealed that all peptides caused clear nucleoid relaxation after only 10 min of treatment ( Figure 6 b). This effect was even more apparent after 60 min for both L3 and L3-K . This observation indicates that C3 , L3 , and L3-K affect DNA packing in a manner similar to polymyxin B. It should be noted that nucleoid relaxation is not a common phenotype caused by AMPs, e.g., tyrocidines and gramicidin S have been shown to have the opposite effect on bacteria. 
Interestingly, C3 displayed a considerable population heterogeneity after 60 min of treatment, showing individual cells with normal, relaxed, and condensed nucleoids. This could be indicative of cells in different stages of inhibition or of different responses of individual cells within an inherently heterogeneous bacterial population. Clear effects were also observed on the bacterial membrane morphology in the FM4-64 stain ( Figure 6 a). All three peptides showed two subpopulations with distinct phenotypes: those with strongly fluorescent membrane foci (white arrows, Figure 6 a) and those where membrane staining was reduced or not visible at all (yellow arrow, Figure 6 a). Most membrane dyes, including FM4-64, prefer more fluid membrane regions and accumulate in those areas when phase separation occurs, appearing as intensely fluorescent foci. Conversely, membrane dyes are often depleted from rigid membrane regions or show less intense fluorescence in rigid membrane environments. 35 , 36 Thus, our results point to membrane phase separation in cells with bright foci and possibly increased membrane rigidity in cells with a very weak membrane stain. When quantifying these phenotypes, a clear trend toward a higher proportion of cells with reduced membrane staining at 60 min compared to 10 min became apparent ( Figure 6 c), suggesting a two-stage effect, where cells first undergo a transient membrane phase separation, possibly followed by overall membrane rigidification. However, it must be noted that FM4-64 binds to both the inner and the outer membrane of Gram-negative bacteria, depending on inner membrane accessibility, 37 , 38 and thus does not allow a reliable distinction between inner and outer membrane effects. The positive control polymyxin B displayed the same distinct phenotypes, yet over 90% of cells displayed the unstained phenotype already after 10 min, suggesting that it acts much faster than L3 and L3-K . In line with the DAPI results, C3 showed considerable population heterogeneity as well as sample-to-sample variation in the FM4-64 stain, which is reflected by the large error bars in Figure 6 c. Due to the overall similarity of the peptides' cytological profiles to that of polymyxin B, we further tested their ability to form pores in the cell membrane. To this end, we used the fluorescence probe propidium iodide, which cannot cross intact membranes but can enter cells through pores of sufficient size. 39 Pore-forming peptides, such as polymyxin B, lead to near-instantaneous uptake of the dye throughout the bacterial population. This effect was indeed observed here with polymyxin B, but only a small subpopulation of cells treated with C3 , L3 , and L3-K showed increased fluorescence ( Figure 7 a), suggesting that the peptides do not act by pore formation. While the proportion of fluorescent cells increased after 60 min, a large fraction of cells (31% for L3 , 35% for L3-K , 64% for C3 ) remained unstained, indicating that the few red-stained cells are most likely perforated because they are undergoing lysis as a consequence of peptide-induced cell death. Propidium iodide is a large organic molecule that is not suitable for detecting smaller ion-conducting pores or channels induced by antimicrobials. To assess whether the peptides may form smaller membrane pores sufficient for the passage of ions, we tested their effects on the membrane potential using the fluorescence probe DiSC(3)5. 40
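The stained/unstained percentages quoted above can be obtained from per-cell mean intensities with a simple threshold rule; the short sketch below illustrates one plausible implementation (the control-based threshold and the synthetic numbers are assumptions for demonstration, not measured data).

```python
# Illustrative sketch: classify cells as propidium iodide-positive when their
# mean red fluorescence exceeds a threshold derived from untreated control cells.
import numpy as np

def pi_positive_fraction(treated, control, n_sd=3.0):
    """treated/control: 1D arrays of per-cell mean PI intensities."""
    threshold = control.mean() + n_sd * control.std()
    return float((treated > threshold).mean())

# Example with synthetic numbers (not measured data):
rng = np.random.default_rng(0)
control = rng.normal(100, 10, 500)                    # untreated population
treated = np.concatenate([rng.normal(100, 10, 340),   # unstained subpopulation
                          rng.normal(400, 50, 160)])  # lysing, PI-stained cells
print(f"PI-positive fraction: {pi_positive_fraction(treated, control):.0%}")  # ~32%
```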
This dye accumulates in the cell membrane in a membrane potential-dependent manner and dissociates when the membrane potential is dissipated. If a pore, large or small, is formed in the cell membrane, a clear and immediate reduction in the DiSC(3)5 fluorescence intensity is observed. This effect can be clearly seen with polymyxin B ( Figure 7 b). In contrast, C3 had no effect on the membrane potential after 10 min and caused partial depolarization after 60 min. L3 and L3-K displayed an increased fluorescence signal at 10 min and a heterogeneous population of partially depolarized cells and cells with a higher fluorescence signal after 60 min. This behavior may be indicative of outer membrane permeabilization, as DiSC(3)5 has only limited outer membrane permeability, and its uptake is increased when the outer membrane is permeabilized, resulting in a heterogeneous cell population with an overall higher fluorescence signal. This effect is, for example, observed after short treatment times with polymyxin B (data not shown). It is conceivable that L3 and L3-K similarly permeabilize the outer membrane, yet at a much slower time scale. To test this hypothesis, we employed an E. coli strain that overexpresses the outer membrane porin FhuA, making these cells more permeable to most fluorescence dyes including DiSC(3)5. 41 Indeed, the consecutive increase and decrease of fluorescence intensity observed in the wild type was strongly reduced in the FhuA-overexpressing strain ( Figure 7 c). This observation suggests that L3 and L3-K first permeabilize the outer and then the inner membrane. Interestingly, in the FhuA-overexpressing strain, L3 completely depolarized most cells after 10 min, while the effect of L3-K only set in after 60 min and remained heterogeneous. This shows that the two peptides in principle have very different inner membrane permeabilization kinetics. These do not become apparent in the wild type, where the outer membrane is fully intact, yet they will be important to take into consideration when these qualified hit structures are modified for future drug development. C3 behaved similarly in both E. coli strains, albeit showing stronger depolarization in the FhuA-overexpressing strain, which suggests that it is partially retained by the outer membrane. This effect, together with the absence of highly fluorescent cells in the wild-type samples, suggests that, in contrast to L3 and L3-K , this peptide does not notably permeabilize the outer membrane. Taken together, our data show that L3 and L3-K affect both the outer membrane and the inner membrane of Gram-negative bacteria. They do not form pores large enough for efficient uptake of the propidium iodide probe but allow the passage of small ions, resulting in dissipation of the membrane potential. They thereby act on a much slower time scale than polymyxin B and do not cause complete membrane depolarization. Since a pore would cause immediate and complete depolarization, we can conclude that L3 and L3-K instead slowly increase the passive permeability of the cell membrane. Together with our FM4-64 staining results, we hypothesize that this may be due to phase boundary defects caused by membrane phase separation. C3 showed similar effects on membrane phase separation. It did not show any effect on the outer membrane and had only mild effects on the membrane potential.
These differences suggest that C3 probably acts similarly to L3 and L3-K but is specific for the inner membrane, while the other two peptides display a dual activity on both membranes of E. coli . All three peptides cause relaxation of the nucleoid, which is indicative of DNA packing defects. The same effect was observed for polymyxin B, suggesting that this could be an as yet unknown consequence of their interaction with the inner membrane. However, this is not a general effect of membrane-targeting antimicrobial peptides, and an additional independent mechanism, possibly involving peptide translocation into the cytosol and interaction with DNA, cannot be excluded at this stage.
Conclusions Based on genome analysis of Streptomyces sp. H-KF8, isolated from a Northern Chilean Patagonian fjord, computational prediction by antiSMASH, and secondary metabolite profiling, linear and cyclic peptides have been synthesized and studied with respect to their microbicidal activity, structural properties, and mechanisms of action. Our results show that, though similar in sequence, the peptides display different microbicidal and structural properties. Peptide L3 showed the best killing activity against S. aureus , E. coli , and C. albicans , in a range from 6.3 to 12.5 μg/mL. Interestingly, removing the basic amino acid Lys from the C -terminus ( L3-K ), and therefore decreasing the overall charge of the peptide, resulted in a slight loss of microbicidal activity, while cyclization ( C3 ) had a dramatic effect on microbicidal activity. Cyclization of peptide L3 also had a large impact on its backbone flexibility: surprisingly, the backbone RMSD increased from 0.8 Å for the linear peptide L3 to 1.9 Å for its cyclic counterpart C3 . Overall, NMR structure determination revealed that the peptides possess a half-helix turn-like core, while the C - and N -termini remain unstructured and flexible. First insights into the peptides' mechanisms of action, obtained by bacterial cytological profiling using a uropathogenic E. coli strain as a model, indicate that the most active peptides L3 and L3-K affect both the outer and the inner membrane of Gram-negative bacteria. They do not form pores large enough for efficient uptake of the fluorescence probe propidium iodide. While both peptides allowed the passage of smaller ions, eventually resulting in dissipation of the membrane potential, only L3 showed this effect after 10 min, whereas L3-K still did not cause complete depolarization at 60 min, showing that at least the latter cannot act through a similar formation of ion-conducting pores. Furthermore, both peptides cause relaxation of the nucleoid, which is indicative of DNA packing defects. However, at this stage, this effect cannot yet be ascribed to either a consequence of their membrane interaction or an independent secondary activity. Taken together, this study represents a promising strategy to discover new serum-stable and noncytotoxic antimicrobial qualified hit structures with novel modes of action from the marine environment, which can be a good starting point toward finding new core structures from previously unexplored natural sources.
Microorganisms within the marine environment have been shown to be very effective sources of naturally produced antimicrobial peptides (AMPs). Several nonribosomal peptides were identified based on genome mining predictions for Streptomyces sp. H-KF8, a marine Actinomycetota strain isolated from a remote Northern Chilean Patagonian fjord. Based on these predictions, a series of eight peptides, including cyclic peptides, were designed and chemically synthesized. Six of these peptides showed antimicrobial activity. Mode of action studies suggest that two of these peptides potentially act on the cell membrane via a novel mechanism that allows the passage of small ions, resulting in the dissipation of the membrane potential. This study shows that, although the peptides are structurally similar, as determined by NMR spectroscopy, the incorporation of small sequence mutations has a dramatic influence on their bioactivity, including their mode of action. The qualified hit sequence can serve as a basis for more potent AMPs in future studies.
Antimicrobial resistance is one of the most serious public health threats today, and new means of combating resistant pathogenic bacteria are urgently needed. 1 Antimicrobial peptides (AMPs) represent a novel class of antimicrobial agents 2 that are produced by living organisms as nonspecific innate immune system modulators. 3 AMPs usually form part of the first-line defense system, showing direct microbicidal effects against many bacteria, fungi, parasites, and/or viruses. 4 They show a broad variety of structures and modes of action. Since peptides are metabolized to amino acids, they are biodegradable and are known to exhibit slower resistance development rates compared to commercial small-molecule antibiotics due to their more complex modes of action. 5 The microbicidal mechanisms of AMPs vary considerably, comprising nonspecific cell membrane disruption, specific binding to membrane- or cell wall-bound targets, interaction with intracellular targets, or even interactions with multiple targets. 6 Nonspecific membrane interactions are the most commonly described in the literature, though such AMPs usually do not end up as promising drug candidates. 3 In contrast, AMPs with more specific targets, located either on the cell surface or inside the cell, are of immense interest for drug development. Therefore, not only the discovery of structurally new compounds but also the study of their mode of action is a crucial part of developing AMPs as potential new drug candidates. The number of natural AMPs is expected to be in the range of several million, 7 but to date only 18 000 validated AMPs are reported in public databases, and even fewer have reached clinical trials so far. 8 One possible approach to boost the bioprospection of novel AMPs is the exploration of understudied environments, like the marine niche ( Figure 1 ). Indeed, several microorganisms within the marine environment have proven to be very effective sources of naturally produced AMPs. 9 Their metabolic strategies are adapted to extreme conditions with large temporal and spatial variability. 10 Members of the Actinobacteria phylum sampled along the vast Chilean coastline have been reported to produce novel bioactive metabolites. 11 − 13 The Comau Fjord, located in Northern Chilean Patagonia, is a suitable environment to explore the diversity and antimicrobial potential of unique marine bacteria, especially those belonging to the Actinomycetota phylum, like Streptomyces , a well-known antibiotic-producing genus. Streptomyces sp. H-KF8 harbors a promising metabolic repertoire due to its phenotypic adaptations: it was isolated from 15 m-deep marine sediments, requires seawater for growth, and tolerates high salt concentrations and low temperatures. 12 In previous reports, the genome sequencing of the strain Streptomyces sp. H-KF8 led to the assembly of 11 scaffolds, representing a 7.6 Mbp linear chromosome. 14 With the help of antiSMASH v3.0, 15 26 biosynthetic gene clusters (BGCs) for specialized metabolites were identified, among which 81% show low similarity to already known BGCs registered in the Minimum Information about a Biosynthetic Gene cluster (MIBiG) repository. 16 Remarkably, the two nonribosomal peptide synthetases (NRPSs) detected in the genome of Streptomyces sp. H-KF8 showed very low similarity to known pathways (mannopeptimycin, 7% similarity to BGC0000388, and streptolydigin, 13% similarity to BGC0001046).
17 These studies uncovered the genetic potential of this strain to produce novel antimicrobial compounds through NRPS pathways. 12 NRPSs are responsible for the synthesis of peptides composed of proteinogenic and nonproteinogenic amino acids that can adopt either a linear or a cyclic structure. 18 The latter is of special interest due to the possibility of overcoming structural and protease instability issues. 19 Additionally, NRPS pathways exhibit very complex chemistry in terms of the diversity of their functional groups. 20 , 21 Following the discovery pipeline ( Figure 1 ), the prediction of the core skeleton of novel peptides, followed by their chemical synthesis, bioactivity and structural analysis, and mode of action studies, constitutes the key steps in characterizing nonribosomal AMPs. In this study, we report the discovery, design, and development of novel naturally inspired AMPs. A series of linear and cyclic antimicrobial peptides, based on genomic data predictions for the marine Actinomycetota strain Streptomyces sp. H-KF8, was synthesized and characterized with a focus on determining the influence of the amino acid composition on their secondary structure and mode of action, resulting in activity against both Gram-positive and Gram-negative bacteria, as well as yeast. Through this process, bioactive qualified hit structures have been identified among a set of structurally similar peptides, which can lead to more potent AMPs in future studies.
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsinfecdis.3c00206 . Peptide from Streptomyces sp. H-KF8 (Figures S1, S2 and Table S1), HPLC and mass spectrometry (Table S2 and Figures S3–S10), amino acid analysis (Figure S11 and Table S3), circular dichroism spectroscopy (Figure S12), NMR analysis (Tables S4 and S5), MMC 99 activity including salt stability measurements (Table S6), serum stability RP-HPLC chromatograms (Figure S13), hemolysis (Figure S14), and bacterial cytological profiling of peptides (Figures S15 and S16) ( PDF ) Supplementary Material The authors declare no competing financial interest. Acknowledgments The Knut and Alice Wallenberg Foundation via the Wallenberg Centre for Molecular and Translational Medicine (A.T.), the Swedish Research Council (2020-04299) (A.T.), the Centre for Antibiotic Resistance Research (CARe) (A.T. and M.W.), Cancerfonden (22 2409) and Fondecyt N° 11121571 (B.C.), and Adlerbertska Stiftelserna (L.B.) are gratefully acknowledged. ESI-FT ICR MS and MALDI-TOF MS/MS analyses were performed at the GIGA Lab of the University of Liège, Belgium. We gratefully acknowledge Joseph Martial, Cécile Van de Weerdt, and Edwin de Pauw. The NMR measurements were performed at the Swedish NMR Center, Gothenburg, Sweden. Figures 1 and 3 b were created using BioRender.com. Abbreviations DMAP, 4-dimethylaminopyridine; DIPEA, N , N -diisopropylethylamine; HATU, 1-[bis(dimethylamino)methylene]-1 H -1,2,3-triazolo[4,5- b ]pyridinium 3-oxide hexafluorophosphate; TMP, 2,2,6,6-tetramethylpiperidine; NMM, N -methylmorpholine; PyBOP, (benzotriazol-1-yloxy)tripyrrolidinophosphonium hexafluorophosphate; BHI, brain heart infusion; NA, numerical aperture; WD, working distance.
CC BY
no
2024-01-16 23:45:29
ACS Infect Dis. 2023 Dec 19; 10(1):79-92
oa_package/92/49/PMC10788856.tar.gz
PMC10788857
0
Introduction Colors are typically used to accentuate, mask, signal, or simply differentiate between different objects. 1 Herein, bright and vivid colors originate from either chromophores or nanosized repetitive structures within the material itself. 1 − 3 The latter depicts the category of “structural colors”. 1 , 4 , 5 Structurally colored materials are able to interfere with, diffract, or scatter light of a specific wavelength in the visible light range (370–700 nm). 1 , 6 These materials have attracted a lot of attention because they do not require dye-based constituents, maintain their vivid and bright color longer, 5 and can be tuned such that they show these colors in a one-, two- or three-dimensional manner. 4 , 7 , 8 Examples of these respective configurations are so-called “Bragg–Stacks” (1D), diffraction gratings (2D), and colloidal photonic crystals (3D). 7 − 9 Due to their versatility, cost-effectiveness, and easy preparation, colloids such as silica, latex, and polystyrene (PS) nanoparticles have been assembled into colloidal photonic crystals that display opal effects. 3 , 10 − 12 If the dimensions and optical properties of the repetitive unit within structural colors are changed in a reversible way, then the materials exhibit stimuli-responsive behavior. In the literature, successful attempts were reported of structurally colored materials that respond to pH, 4 humidity, 1 , 13 , 14 temperature, 4 UV light, 6 mechanical stress, 8 magnetic fields, 15 applied voltages, 16 or the presence of biomolecules. 17 Herein, the nanostructures that were used to induce the required structural changes include etalons, 4 nanovolcanos, 18 hydrogels, 1 , 13 , 14 , 19 thin films, 6 magnetic nanoparticles, 15 catalytically active nanopillars, 16 and polymer brush-grafted nanoparticles. 17 Stimuli-responsive structural colors have been used in vapor sensors due to their nonfatigue properties. 20 , 21 Examples of such materials are humidity and organic vapor sensors based on chameleon-inspired actuators 20 and Bragg reflectors. 19 , 21 , 22 Due to the type of nanostructures used, all of the aforementioned materials possess a fast response and recovery time (<1–10 s). 20 − 22 Fast response times are generally advantageous for sensing applications. 23 However, while fast recovery times are essential for in situ vapor sensing, they pose a limitation for vapor sensing ex situ. Ex situ sensors are typically applied in environments that are hazardous or sterile and where in situ, real-time monitoring by colorimetric sensors is impossible. In the literature, they have been explored for temperature, 24 water activity, 25 and bioprocess monitoring. 26 In recent years, polymer brushes have been heavily investigated for their stimuli-responsive features. 27 − 29 Polymer brushes consist of macromolecular chains anchored by one chain end to a substrate at a sufficiently high density. 30 , 31 Various polymer brush types have successfully displayed stimuli-responsive features for pH, 4 , 32 humid air, 4 , 33 temperature, 4 , 34 , 35 as well as specific solvents and volatile organic compounds (VOCs). 36 − 38 Based on the affinity of the polymer with its surroundings, the polymer chains will stretch away from a surface and swell in height. Herein, the swelling behavior of a polymer brush is typically expressed in the swelling ratio α. 37 Typical swelling ratio values for polymer brushes depict 4.0–4.5 for exposure to good solvents and 1.3–2.2 for their corresponding near-saturated vapors. 
36 , 37 , 39 This difference can be explained by the chemical potential difference Δμ, which is smaller in a polymer brush-vapor system than in a polymer brush-liquid system. 29 Additionally, typical response and recovery times are on the order of minutes, 36 which makes polymer brushes suitable for both in situ and ex situ vapor sensing. Polymer brushes have been used in structurally colored materials via Ag-coated nanovolcano arrays 18 and Au-coated etalons. 4 Herein, the polymer brushes were utilized for their responsive behavior toward water vapor, pH, and temperature. 4 , 18 However, both materials require the additional use of precious metals (Ag and Au) and the availability of dedicated fabrication technologies such as thermal evaporation 4 , 18 and reactive ion etching (RIE). 18 Polymer brush-grafted nanoparticles have been used as a repetitive unit in structural colors due to the self-arranging, 5 self-healing, 40 and stimuli-responsive 41 properties enabled by the polymer brush coating. Thermo- and magnetically responsive structural colors were successfully fabricated based on poly( N -isopropylacrylamide) (PNIPAM) 41 and poly(methyl methacrylate) (PMMA) 15 brush-grafted nanoparticles, respectively. Solvated polymer brush systems were used in both cases to achieve noticeable structural changes and thus a color transition of the material. To the best of our knowledge, stimuli-responsive structural colors based on polymer brush-grafted nanoparticles have not yet been demonstrated in air. Therefore, it is not known whether the reduced swelling of polymer brush-vapor systems leads to noticeable, reversible color shifts in structurally colored materials. It is also undetermined whether the change in polymer brush geometry from a flat substrate to a high-surface-area material affects its response and recovery times with a vapor stimulus. Studying the combination of a nanoporous structure with stimuli-responsive polymer brushes under vapor conditions may reveal different response and recovery behavior compared to conventional vapor-responsive polymer brushes or structural colors, which could allow for better in situ or even ex situ vapor sensing. This paper presents a facile method to obtain ethanol vapor-responsive structural colors based on orderly stacked polymer brush-grafted nanoparticles. Various films composed of PNIPAM brush-grafted silica nanoparticles (PNIPAM- g -SiNPs) were exposed to near-saturated ethanol vapor to study their stimuli-responsive behavior. PNIPAM is a polymer that has a satisfactory affinity to ethanol vapor 42 and, due to its thermosensitivity, opens the possibility for future studies on multiresponsive materials. The novelty of our work lies in the long color recovery times of these materials, which make them suitable for applications such as ex situ vapor sensing. An illustration of the material design and ethanol vapor-responsive behavior is shown in Figure 1 . The recovery behavior of the PNIPAM- g -SiNP films was determined to test their potential as an ex situ vapor sensor. These films were compared to reference materials with nonfunctionalized (SiNPs) and PMMA brush-grafted silica nanoparticles (PMMA- g -SiNPs), which have a significantly lower affinity to ethanol liquid and vapor. 43 Structural changes of the repetitive unit were monitored before, during, and after exposure to saturated ethanol vapor to mark differences between stacked nanoparticle films with varying thicknesses and surface functionalities.
Method 1 Silicon substrates (3 × 1 cm 2 ) were placed upright inside a snap-cap vial filled with a 1–5 wt % dispersion of PNIPAM- g -SiNPs in ethanol or PMMA- g -SiNPs in DCM. The solvent was allowed to evaporate for 2 days before the nanoparticle films were removed from the vial.
Results and Discussion General Material Characteristics The desired polymer brush-grafted SiNPs were successfully synthesized as described in the Experimental Section . The SiNP diameter was measured to be 125.5 ± 3.2 nm by DLS and SEM (see Figure S1 ). PNIPAM brushes of varying dry heights (8–50 nm) were grafted from the synthesized SiNPs, as confirmed by both SEM and TEM imaging (see Table S1 , Figures S1 and S2 ). Further details regarding the FTIR spectra and swelling characteristics of PNIPAM- g -SiNPs and PMMA- g -SiNPs can be found in the Supporting Information, Figure S3 , Tables S1 and S2 . Various thicknesses of stacked PNIPAM- g -SiNP films were obtained via methods 1 and 2. An overview of the material characteristics is shown in Table 1 , which includes the type of core–shell nanoparticle, deposition method, film thickness t , nanoparticle diameter d , and color appearance in ethanol vapor and air. Method 1 yielded relatively thick (approximately 700–1200 nm) materials with a low surface roughness ( Figure 2 a). Method 2 allowed for a more flexible approach, whereby monolayers and bilayers of approximately 150–230 nm in height were obtained for withdrawal speeds of 0.1–0.5 mm/s ( Figure 2 b). Again, the surface appeared to be devoid of protrusions that could negatively affect its optical properties. The nanoparticles within the films were closely packed (see Figure S4 ), which is advantageous for a structural color. The resulting color of the film was dependent on the PNIPAM- g -SiNP diameter and angle of perception, while independent of the film thickness. As provided in Table 1 , PNIPAM- g -SiNP diameters of 223.5 ± 5.4 nm gave rise to green structurally colored films in dry conditions, whereas blue alternatives were obtained with PNIPAM- g -SiNPs of 138.0 ± 2.1 nm or 174.6 ± 3.3 nm. Meanwhile, materials with the same PNIPAM- g -SiNP diameter of 223.5 ± 5.4 nm and different film thicknesses of 783 ± 8 and 1152 ± 12 nm possessed an identical green hue. Reflection spectroscopy revealed that an increase in the average PNIPAM- g -SiNP diameter also corresponded to a red shift in the reflection peak λ max . Figure 3 shows that an increase in average PNIPAM- g -SiNP diameter from 149.9 to 160.1 nm resulted in a reflection peak shift from 478.0 to 497.1 nm, respectively. These results are in line with the theoretical expression of Bragg–Snell’s law, which describes 3D colloidal photonic crystals wherein m is the diffraction order, λ is the reflective wavelength in nm, D is the colloidal spacing in nm, n eff is the effective refractive index, and θ 0 is the angle of incident light in degrees ( ° ). 8 , 9 , 51 For the PNIPAM- g -SiNP sample with λ max = 492.5 nm, eq 1 predicts λ calc to be 507.2 nm, assuming a FCC colloidal packing and measuring angle of 0° (see Figure S5 ). The difference between the theoretical prediction and experimental observation can be attributed to the influence of n eff and D on the calculation of λ max . Both values were estimated based on the results from DLS, SEM, and AFM, and slight alterations in their values can correspond to a ±12 nm difference in λ calc per parameter. This parameter influence study is included in the Supporting Information, Table S3 . Upon establishing that these stacked PNIPAM- g -SiNP films possess structural coloration, we studied their stimuli-responsive behavior in saturated ethanol vapor. 
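The explicit form of eq 1 is not reproduced in this extract. The sketch below assumes a commonly used Bragg-Snell expression for FCC colloidal crystals, m·λ = (8/3)^(1/2)·D·(n_eff^2 - sin^2 θ0)^(1/2), with n_eff^2 approximated by a volume-fraction-weighted average; the refractive-index values are illustrative guesses for a composite silica/PNIPAM particle, not the parameters used in the paper. The point is only that this form lands in the reported ~490–510 nm range for D ≈ 223.5 nm.

```python
# Hedged sketch of a commonly used Bragg-Snell expression for FCC colloidal
# crystals (the exact form of eq 1 is not reproduced in this extract):
#   m * lam = sqrt(8/3) * D * sqrt(n_eff**2 - sin(theta)**2)
# The particle refractive indices below are illustrative assumptions.
import math

def bragg_snell_lambda(D_nm, n_particle, theta_deg=0.0, m=1, fill=0.74, n_air=1.0):
    n_eff_sq = fill * n_particle**2 + (1.0 - fill) * n_air**2
    return (math.sqrt(8.0 / 3.0) * D_nm / m
            * math.sqrt(n_eff_sq - math.sin(math.radians(theta_deg))**2))

for n_p in (1.45, 1.50):   # plausible composite silica/PNIPAM indices
    lam = bragg_snell_lambda(D_nm=223.5, n_particle=n_p)
    print(f"n_particle = {n_p}: lambda_calc = {lam:.0f} nm")
# -> roughly 490-510 nm, i.e., in the range of the reported lambda_max/lambda_calc
```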
Ethanol Vapor-Responsive Material Properties Color shifts at near-saturated and lower ethanol vapor concentrations were observed by exposing our materials to a gentle ethanol vapor flow. The samples were positioned at varying distances from this flow to qualitatively assess the sensitivity to ethanol vapor. A comparison between PNIPAM- g -SiNP (1) and PNIPAM- g -SiNP (2) films is provided in the Supporting Information ( Figure S6 ). The stimuli-responsive behavior of various PNIPAM- g -SiNP films was monitored via a sealed chamber filled with ethanol vapor at near saturation level. By placing the structurally colored samples inside these sealed chambers, color changes could be observed through the transparent glass material. Due to the obstruction of the glass material, the response time of the materials in exposure to ethanol vapor could not be determined quantitatively. However, a clear red-shift color change could be seen from outside the sealed chamber in mere seconds. Despite a fast responsive behavior, our samples were kept in the sealed chamber for ∼1 day to allow for possible polymer brush swelling and equilibration. If the sealed chamber was opened after a few seconds, the samples appeared to be nonequilibrated as the recovery time increased for increasing incubation times. Therefore, the total recovery time could be more precisely determined upon opening the sealed chamber after equilibration, after which the samples were filmed. The average duration of these color transitions was determined by repeated cycles of opening and closing the sealed chamber (>5 repetitions). All color transitions were reversible and consistent throughout the repeat experiment. In addition, the color transitions of a single sample were reproducible over >5 ethanol vapor exposures ( Figure 4 ) and at least 250 days after their fabrication (see Supporting Information, Table S4 ), indicating optical stability. An overview of the total recovery times of different PNIPAM- g -SiNP films and reference materials consisting of PMMA- g -SiNPs and nonfunctionalized SiNPs is shown in Table 1 . Depending on the fabrication method and sample type, a wide variety of blue-shift changes were observed in the recovery phase. Interestingly, multilayered PNIPAM- g -SiNP(1) films were the only material type to possess a color transition between the two phases. First, a subtle shift in color was observed 5.4–6.3 s after opening the sealed chamber (see: delay). The color transition to its original state in air took significantly longer and was determined to be 33.3–36.2 s (see: relaxation). These longer recovery times closely resemble the behavior of hydrogel-based colorimetric sensors, which also report recovery times in the order of minutes. 52 , 53 The delayed recovery phenomenon is more closely investigated with a continuous reflection spectroscopy experiment and is presented in Figure 5 . In this experiment, multiple reflection spectra of the same material were taken over time. The corresponding reflection peaks were extracted and plotted as a function of time, which clearly illustrated the two-phase recovery behavior. Namely, a stagnant period (delay) is followed by a rapid blue shift from 667 to 577 nm, after which the reflection peak value slowly equilibrates to 471 nm (relaxation). Other results from Table 1 reveal that thinner films of PNIPAM- g -SiNPs(2) display a similar delay in recovery of ∼6.3 s, followed by an immediate transition to their original color. 
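The peak-tracking step of the continuous reflection-spectroscopy experiment can be sketched as follows; this is a generic illustration of extracting λmax per spectrum and plotting it against time, not the authors' actual processing routine.

```python
# Generic sketch: track the reflection-peak wavelength over time from a series
# of spectra, as used to visualize the delay/relaxation phases (illustrative
# processing, not the authors' exact routine).
import numpy as np

def peak_wavelengths(wavelengths_nm: np.ndarray, spectra: np.ndarray) -> np.ndarray:
    """wavelengths_nm: shape (n_wl,); spectra: shape (n_times, n_wl) reflectance."""
    return wavelengths_nm[np.argmax(spectra, axis=1)]

# Usage (with matplotlib): lam_max = peak_wavelengths(wl, R); plt.plot(t_seconds, lam_max)
# A stagnant plateau (delay), a fast blue shift, and a slow approach to the
# final value (relaxation) then appear directly in the lam_max(t) trace.
```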
Multilayered films of both reference materials also showed a color transition, which occurred much faster than the PNIPAM- g -SiNP films (0.4–1.5 s). The difference in color transitions is visualized in Figure 6 , which shows screen captures of the films in the recovery phase. Screen captures with a time interval of Δ t of 3 s clearly show a different behavior between PNIPAM- g -SiNP films and the reference material. PNIPAM- g -SiNP(1) began to show a slight color change at t = 9 s, indicating the delay time. PNIPAM- g -SiNP(2), on the other hand, completely reverted to its original blue color in air at t = 9 s. For both PMMA- g -SiNP (1) and SiNP (1) materials, the color transition already occurred between t = 0 and t = 3 s. Given the fact that SiNP films will not experience an increase in nanostructural dimensions due to polymer brush swelling and that PMMA polymer brushes have a low affinity to ethanol, 43 the results suggest that there are two effects influencing the stimuli-responsive behavior of stacked nanoparticle films. In the case of PNIPAM- g -SiNP materials, one influencing factor could be the swelling of PNIPAM brushes in ethanol vapor, hereby influencing n eff and D parameters in eq 1 . To test this hypothesis, AFM was used as a tool to monitor the structural dimensions of the nanoparticle films. Figure 7 shows the relative size extracted from PNIPAM- g -SiNP and PMMA- g -SiNP films before, during, and after exposure to near-saturated ethanol vapor. This is referred to as the initial dry (dry,i), ethanol vapor (EtOH), and final dry (dry,f), respectively. From the acquired AFM images, the thickness of film t was compared to its original dry state t dry,i . The relative thickness, t / t dry,i , is an intercomparable measure for dimension changes in the Z -direction regardless of the initial film thickness t dry,i . Likewise, the relative nanoparticle diameter d / d dry,i provides an intercomparable measure for diameter changes in the XY -direction regardless of the d dry,i value. By comparison of the two PNIPAM- g -SiNP materials with a PMMA- g -SiNP reference, the effect of ethanol vapor on the nanoparticle diameter becomes apparent. For the PNIPAM- g -SiNP (1) sample, an increase in the relative film thickness of 2.8 ± 1.5% and relative nanoparticle diameter of 4.0 ± 1.8% was observed. For the PNIPAM- g -SiNP (2) sample, relative dimensional changes of 8.4 ± 3.7 and 12.4 ± 8.0% were observed in ethanol vapor. These results suggest an absolute increase in PNIPAM brush height of ∼5–15 nm, with thicker films of PNIPAM- g -SiNPs showing a smaller increase in the nanoparticle diameter compared to thinner films of PNIPAM- g -SiNPs. We attribute this difference to the high spatial constraints for PNIPAM brush swelling in the multilayered nanoparticle films. From Figure 7 , reversible swelling of the PNIPAM- g -SiNP materials is also evident by the decrease of the relative thickness and nanoparticle diameter after ethanol vapor exposure. Ellipsometry measurements of a PNIPAM brush on a silicon substrate were done in saturated ethanol vapor to determine an indicative swelling ratio for the PNIPAM- g -SiNP materials (see Figure S7 ). With the obtained swelling ratio of α = 2.23 ± 0.13 and PNIPAM brush dry heights of 10–50 nm, the PNIPAM- g -SiNP materials were expected to swell >20 nm in the presence of saturated ethanol vapor. The reduced swelling capability of the PNIPAM- g -SiNP is likely due to spatial constraints in the nanoporous and closely packed films in all three dimensions. 
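For comparison, the unconstrained swelling expected from the ellipsometric swelling ratio can be set against the ~5–15 nm increase inferred from AFM; the sketch below uses the simplifying assumption that a free brush would grow by (α - 1)·h_dry, which is an illustration rather than a rigorous model of the confined geometry.

```python
# Rough, illustrative comparison: expected free-brush height increase
# (alpha - 1) * h_dry versus the ~5-15 nm increase inferred from AFM
# for the brush-grafted nanoparticles in the stacked films.
alpha = 2.23                         # ellipsometric swelling ratio in ethanol vapor
for h_dry in (10, 20, 30, 40, 50):   # nm, range of dry brush heights in this work
    expected = (alpha - 1.0) * h_dry
    print(f"h_dry = {h_dry:2d} nm -> unconstrained swelling ~ {expected:.0f} nm")
# For dry heights of about 20 nm and above, the unconstrained estimate exceeds
# the ~5-15 nm increase observed in the stacked films, consistent with spatial
# confinement limiting brush swelling.
```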
As expected, the reference sample PMMA- g -SiNP (1) did not show a significant change during ethanol vapor exposure, with t / t dry,i = 2.1 ± 3.8% and d / d dry,i = −0.3 ± 2.6%. This is in line with the knowledge that PMMA brushes possess lower swelling ratios than PNIPAM brushes in ethanol media. 34 , 41 , 43 With these results, it is established that a part of the stimuli-responsive behavior of stacked PNIPAM- g -SiNP films occurs due to swelling of the polymer brushes. Our measurements were conducted at room temperature (20 °C), which is below the lower-critical solution temperature (LCST) of PNIPAM of 32 °C. 41 This means the polymer brushes can be assumed to be well-solvated 34 and thus susceptible to ethanol vapor uptake. The influence of humidity on swelling of PNIPAM and PMMA brushes is expected to be negligible, as the relative humidity (RH) of ambient air in the laboratory facilities was measured at 30–40 RH%, and PNIPAM films show little swelling (α ∼ 1.02) at 40 RH%. 54 At higher RH% values (>45 RH%), the influence of humidity on the material’s sensitivity to ethanol vapor cannot be neglected. 54 , 55 In those cases, we suggest operating our sensor material above the LCST or developing polymer brush-grafted nanoparticle films with hydrophobic brushes. To the best of our knowledge, the extent of PNIPAM collapse in ethanol–water vapor mixtures above the LCST has not been investigated yet. In other words, the effectiveness of operating above the LCST to achieve high ethanol vapor selectivity toward humid air remains to be investigated. At room temperature, we have now established that the PNIPAM brushes act as an absorber for ethanol vapor by which the material is able to maintain its color state longer. This is in line with the relatively long response/recovery times reported in polymer brush systems with acetone, 36 methanol, and ethanol vapor stimuli. 39 Delaying the color recovery with polymer brush coatings opens the possibility for ex situ sensing with structural colors. The response of all multilayered nanoparticle films in ethanol vapor, regardless of their surface functionality, is thought to be due to condensation of ethanol vapor between the nanoparticles. Even small layers of condensed ethanol liquid will undoubtedly affect the effective refractive index n eff in Bragg–Snell’s law and cause a red-shift color transition to occur. Ethanol is a volatile compound and evaporates easily, which means relaxation of the material to its original color should happen relatively fast and within seconds. Our assumption is consistent with the observations shown in Table 1 , which indicate short relaxation times and no delay period for multilayered, nonfunctionalized SiNP films. The relatively long recovery times of the multilayered PNIPAM- g -SiNP films could be explained by the influence of both ethanol vapor condensation and absorption in the PNIPAM brushes. The additional factor of vapor absorption in the PNIPAM brushes has been proven by AFM experiments and is shown in Figure 7 . While condensed ethanol evaporates quickly, it is known that absorbed volatile compounds take relatively long to leave a polymer brush. 36 Therefore, absorption of ethanol appears to be the reason for the delay period that PNIPAM- g -SiNP films consistently show in the recovery phase, regardless of the film thickness. 
While the two main contributing factors for vapor-responsive behavior in our structurally colored materials were identified, more research is needed to investigate the relative effects of vapor condensation and absorption in core–shell nanoparticle films. To extend our findings to other vapor-responsive systems, the minimum affinity between polymer chains and volatile analytes to achieve long recovery times must be examined. Next, to validate the sensor sensitivity, we suggest a follow-up study to assess the selectivity of our structurally colored materials with other VOCs. Improvements in this selectivity may be achieved by incorporating block-copolymer brushes or using multiple sensing platforms with different polymer brush-grafted nanoparticles.
Conclusions This article presents a successful method to obtain ethanol vapor-responsive structural colors based on stacked PNIPAM- g -SiNPs. PNIPAM- g -SiNP films of varying thicknesses and fabrication methods change color reversibly in near-saturated ethanol vapor. Herein, multilayered PNIPAM- g -SiNP films show delayed recovery characteristics, which occur in two phases: a sharply defined blue shift (delay) after 5.4–6.3 s and a gradual blue shift until a total recovery time of 33.3–36.2 s is reached (relaxation). Structural colors with nonfunctionalized SiNPs or PMMA- g -SiNPs also show a red shift in an ethanol vapor environment but transition back to their original state 0.4–1.5 s after being exposed to air. With AFM, we validate that selective swelling of the PNIPAM brushes takes place, which effectively alters the internal structure of the nanoparticle films and thus the structural color. The relatively long recovery times of our PNIPAM- g -SiNP films distinguish them from other vapor-sensitive structural colors and render the material highly suitable for ex situ vapor sensing.
Structural colors are formed by the periodic repetition of nanostructures in a material. Upon reversibly tuning the size or optical properties of the repetitive unit inside a nanostructured material, responsive materials can be made that change color due to external stimuli. This paper presents a simple method to obtain films of ethanol vapor-responsive structural colors based on stacked poly( N -isopropylacrylamide) (PNIPAM)-grafted silica nanoparticles. Our materials show clear, reversible color transitions in the presence of near-saturated ethanol vapor. Moreover, due to the absorption of ethanol in the PNIPAM brushes, relatively long recovery times are observed (∼30 s). Materials based on bare or poly(methyl methacrylate) (PMMA) brush-grafted silica nanoparticles also change color in the presence of ethanol vapor but possess significantly shorter recovery times (∼1 s). Atomic force microscopy reveals that the delayed recovery originates from the ability of PNIPAM brushes to swell in ethanol vapor. This renders the films highly suitable for ex situ ethanol vapor sensing.
Experimental Section Materials Methyl methacrylate (MMA, 99%) was separated from its polymerization inhibitor content by an alumina oxide column. N -Isopropylacrylamide (NIPAM, ≥99%) was purified by heating (40 °C) and recrystallization (0 °C) in toluene. Copper (I) bromide (CuBr) was cleaned with acetic acid and subsequently washed with ethanol prior to use. Copper (II) bromide (CuBr 2 , 99%), (3-aminopropyl) triethoxysilane (APTES, 99%), α-bromoisobutyryl bromide (BiBB, 98%), N , N , N ′, N ′′, N ′′′-penta-methyldiethylenetriamine (PMDETA, 99%), triethylamine (TEA, 99%), tetraethyl orthosilicate (TEOS, ≥99%), hydrogen chloride (HCl, 60%), and ammonia (NH 4 OH, 32%) were obtained from Sigma-Aldrich and used without purification. Milli-Q water was purified from a Milli-Q Advantage A10 purification system (Millipore). Stöber Protocol for 125 nm SiNPs This Stöber protocol is adapted from Yu et al. 44 100 mL of ethanol, 35 mL of Milli-Q water, and 3.25 mL of NH 4 OH were placed in a 250 mL flask and heated to 65 °C. While stirring at 550 rpm, 8.0 mL of TEOS was added at 0.5 mL/s. The reaction mixture was continuously stirred for 1 h. The synthesized SiNPs were separated from the reactants by centrifugation for 30 min at 10,000 rpm (20 °C). Two washing steps in ethanol were done to obtain a stock dispersion of SiNPs in ethanol. Nanoparticle Surface Preparation for SI-ATRP 45 − 47 The SiNP surface functionalization steps, including the SiNP surface preparation and polymerization of PNIPAM and PMMA, are illustrated in the Supporting Information, Scheme S1 . A hydrolysis step was done to maximize the amount of OH-groups on the nanoparticle surfaces. 45 For this reaction, a SiNP dispersion in ethanol/water (v/v ratio 204:6) was prepared. HCl was added dropwise until a solution at pH 1.0 was reached. The reaction mixture was continuously stirred at 500 rpm for 16 h, followed by centrifugation. The hydrolyzed nanoparticles were washed twice with an ethanol solvent. Next, the SiNP surface was modified with APTES, which served as an anchoring layer. A 250 mL round-bottom flask was filled with a 4 g/100 mL dispersion of hydrolyzed SiNPs in ethanol and 2 mL of APTES. This mixture was stirred for 3 h at 700 rpm (20 °C). Subsequently, the APTES-functionalized SiNPs were collected by centrifugation. One washing step was performed with ethanol. We are aware that APTES and other silane anchors can degraft from the substrate upon long-term exposure to organic media, 48 ethanol/water mixtures, 49 or humid vapor. 33 However, the samples described in this paper were freshly prepared and not exposed to solvents for >2 days. For such immersion times, no degrafting of APTES was observed. 33 If longer utilization times are needed, we recommend stabilizing the anchoring layer with a diblock copolymer brush with an additional hydrophobic block or a multivalent bond anchor, e.g., poly(glycidyl methacrylate). 50 The APTES-functionalized SiNPs were solvent-exchanged to dimethylformamide (DMF) by means of centrifugation (30 min, 20 °C, 10,000 rpm). A 4 g/100 mL dispersion in DMF was cooled to 0 °C before adding 3 mL of TEA and 1 mL of BiBB dropwise and simultaneously to the flask. The reaction proceeded for 15 h at 550 rpm (20 °C). BiBB-functionalized nanoparticles were collected by using centrifugation, followed by two washing steps in DMF. Preparation of PNIPAM- g -SiNPs This synthesis route is adapted from Manivannan et al. 
PNIPAM brush growth via SI-ATRP proceeded with 180 mg of BiBB-functionalized SiNPs, 1.0 g of NIPAM, 124 μL of PMDETA, 20 mg of CuBr, 4 mL of Milli-Q water, and 4 mL of methanol. The ATRP reaction flasks were stirred at 500 rpm and purged with nitrogen at 1 mL/min. After initiation of the SI-ATRP reaction, the reaction flask was continuously stirred for 1–3 h to yield different polymer brush thicknesses. The reaction was quenched by opening the flask to air and centrifuging the reaction mixture. Two subsequent washing steps were carried out with water and ethanol to remove the catalyst and ligand in solution. Preparation of PMMA-g-SiNPs 47 PMMA brush growth via SI-ATRP proceeded with 1000 mg of BiBB-functionalized SiNPs, 4.0 mL of MMA, 94.6 μL of PMDETA, 0.0456 g of CuBr, 0.0303 g of CuBr2, and 43 mL of DMF. The reaction flasks were stirred at 500 rpm and purged with nitrogen at 1 mL/min. After initiation of the SI-ATRP reaction, the reaction flask was continuously stirred for 0.5–2.0 h at 65 °C to yield different polymer brush thicknesses. The reaction was quenched by opening the flask to air and centrifuging the reaction mixture. Two subsequent washing steps were carried out with DMF to remove the catalyst and ligand in solution. Preparation of Stacked Nanoparticle Films Method 2 A step motor (DC motor 23.112–050, Maxon) was used to move 1 × 1 cm2 silicon substrates at a constant withdrawal speed. 5 wt % dispersions of PNIPAM-g-SiNPs and PMMA-g-SiNPs were prepared in ethanol and dichloromethane (DCM), respectively. During the dip-coating procedure, silicon substrates were immersed in and withdrawn from the dispersion at a speed of 0.10–1.00 mm/s to create nanoparticle films of different thicknesses. Characterization Nanoparticle diameters were determined with scanning electron microscopy (SEM, JSM-6010LA, JEOL) and dynamic light scattering (DLS, Zetasizer Nano-ZS, Malvern Panalytical). To provide solid evidence for a polymer brush layer surrounding the SiNPs, transmission electron microscopy (TEM, Spectra300, Thermo Scientific) and Fourier transform infrared (FTIR) spectroscopy (Alpha II, Bruker) were also performed. The ordered stacking of core–shell nanoparticles leads to the formation of structural colors, which were characterized by cross-sectional SEM (JSM-6010LA, JEOL), reflection spectroscopy in the wavelength range of 400–700 nm (HR4000, Ocean Insights), and atomic force microscopy (AFM, Multimode, Bruker). The AFM tapping mode was used with silicon cantilevers (NanoWorld NCH) of tip radius <8 nm, stiffness ∼42 N/m, and a resonance frequency of 320 kHz. The influence of near-saturated ethanol vapor on the structure of the nanoparticle films was tested by enclosing the AFM device in a closed 5 L chamber. Open vials containing a total of 160 mL of ethanol were placed inside the chamber. The solvent was allowed to evaporate for at least 30 min before the first AFM images were taken. This waiting time was determined to be sufficient, as a stable and equilibrated state was observed after 30 min. The swelling ratio of PNIPAM brushes upon exposure to ethanol vapor was determined with ellipsometry measurements (M2000-X, J.A. Woollam Co. Inc.).
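As a small illustration of the swelling-ratio determination mentioned above, the sketch below computes a simple swollen-to-dry thickness ratio from ellipsometric film thicknesses; both the ratio definition and the thickness values are assumptions for illustration, not the authors' reported data or fitting procedure.

```python
# Minimal sketch: swelling ratio of a polymer brush from ellipsometric
# thicknesses, defined here (as an assumption) as h_swollen / h_dry.
# The thickness values are hypothetical, not measured data from this work.
def swelling_ratio(dry_thickness_nm: float, swollen_thickness_nm: float) -> float:
    if dry_thickness_nm <= 0:
        raise ValueError("dry thickness must be positive")
    return swollen_thickness_nm / dry_thickness_nm

# Hypothetical example: a 30 nm dry PNIPAM brush swelling to 45 nm in
# near-saturated ethanol vapor corresponds to a swelling ratio of 1.5.
print(swelling_ratio(30.0, 45.0))
```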
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsapm.3c02397 . Additional experimental data regarding the SiNPs, structural colors, and their vapor sensing characteristics ( PDF ) Supplementary Material Author Contributions Esli Diepenbroek: development or design of methodology. Conducting a research process and performing the experiments. Creation of the published work, writing the initial draft. Maria Brió Pérez: oversight and leadership responsibility for the research planning and execution, including mentorship. Preparation of the published work, critical revision–including pre- or post-publication stages. Sissi de Beer: management and leadership responsibility for the research planning and execution. Preparation of the published work, critical revision–including pre- or post-publication stages. The authors declare no competing financial interest. Acknowledgments The authors thank C.J. Padberg for his technical support, J.G. Bomer for his help with the reflection spectroscope setup, the Inorganic Membranes research group (University of Twente) and ir. L.B. Veldscholte for facilitating the ellipsometry swelling experiments, and Prof. Dr. P.W.H. Pinkse and Prof. Dr. J.C.T. Eijkel for their help interpreting the data on structural coloration.
CC BY
no
2024-01-16 23:45:29
ACS Appl Polym Mater. 2023 Dec 14; 6(1):870-878
oa_package/4b/e2/PMC10788857.tar.gz
PMC10788858
0
Introduction Frontal polymerization (FP) is a process wherein an initial stimulus initiates a localized polymerization reaction that propagates through uncured resin, curing the resin as it travels. The process relies on heat diffusion and the Arrhenius rate kinetics of an exothermic reaction. 1−4 FP has been used with various monomer systems that polymerize by different mechanisms, such as acrylates via free-radical polymerization, 5−7 dicyclopentadiene via ring-opening metathesis polymerization, 8−10 and urethane polymerizations mediated by catalysts. 11,12 Radical-induced cationic frontal polymerization (RICFP) allows for FP of epoxies and vinyl ethers by combining a thermal radical initiator with a superacid-generating salt whose decomposition is promoted by the radicals. 13−15 It can be initiated by either heat or light. Typically, iodonium-based salts are used in RICFP as the superacid generators, with hexafluoroantimonate or aluminate counterions being common. Peroxides 14 and benzopinacol, a gas-less initiator, 16 are common radical initiators in RICFP. If RICFP is initiated thermally, the radical initiator decomposes to produce radicals that reduce the superacid generator, which, after further decomposition steps, releases a superacid based on its counterion. The superacid can then initiate cationic polymerization, and the heat of propagation generates more radicals from the radical initiator, sustaining the cycle. 14,17 If initiated by ultraviolet (UV) light, the superacid generator is instead excited and decomposed by the UV light to generate the superacid, which then initiates polymerization; the heat generated by propagation subsequently cleaves the radical initiator to promote further superacid generator decomposition. 13 A simplified mechanism of the RICFP process is shown in Figure S1. Additive manufacturing (AM or 3D printing) using FP has been explored and reported in the literature. Frontal ring-opening metathesis polymerization has been used extensively to demonstrate the potential of 3D printing through FP. 18−21 The use of FP for 3D printing has advantages in energy consumption and the speed of printing. In an ideal process, the front is initiated immediately after extrusion and propagates closely behind the extruded material, so that the resin, at a suitable viscosity, does not sag and the front remains continuous. There are a few reports of RICFP being used for 3D printing. In their first publication on the topic, Zhang et al. 22 investigated printing a formulation containing a commercial bisphenol A diglycidyl ether (BADGE)-based epoxy resin with an iodonium aluminate salt and benzopinacol initiating system. They soaked continuous carbon fiber (CF) tows with the formulation and successfully cured this material frontally while extruding, observing an increase in front velocity when the resin was soaked into the carbon fibers. A subsequent report by Zhang et al., 23 using the same formulation as above, found that the front velocity increased with the addition of 1 wt % carbon nanotubes (CNTs) and that formulations combining CNTs and continuous CF tows resulted in improved mechanical properties. CNTs are fillers with high thermal diffusivity; they have a much smaller diameter and a higher specific surface area than carbon fibers. 24 Zhang et al. 25 performed a detailed study of the effects of CNTs, graphene oxide, and discontinuous CFs on RICFP for printing.
Using the same resin, they found that 1 wt % CNTs or 1 wt % discontinuous CFs gave small increases in front velocity compared to the neat resin, while 1 wt % graphene oxide reduced the front velocity. They also demonstrated the printing of a spiral shape using epoxy resin with CNTs. In the three papers above, a heat bed set to 120 °C was used to initiate the fronts. Gao et al. 26 investigated the frontal curing of a printed, highly viscous novolac epoxy resin with an iodonium aluminate salt and benzopinacol, applying the initiating system with an atomizer spray while printing. This was initiated with a 90 °C heat bed, and the front velocity was found to depend on layer thickness and atomizer parameters. In this paper, we demonstrate the potential of printing free-standing structures using the frontal polymerization of epoxy-vinyl ether composites. Vinyl ethers were previously shown to increase reactivity when added to epoxy systems. 14 By using a vinyl ether in tandem with two different epoxies, we aim to formulate a reactive system with desirable rheological properties for extrusion-based 3D printing. The effects of resin composition, initiator concentration, filler type, and loading on front kinetics were investigated. Carbon nanofibers (CNFs) and milled carbon fibers (MCFs) were compared as fillers to tune front kinetics, and CNFs were investigated to manipulate mechanical properties, while fumed silica (FS) was used to further tailor viscosity, as FS is inert in RICFP. 14,17 CNFs are smaller than carbon fibers and have a higher specific surface area, which could assist in improving physical properties. They are larger than CNTs, less thermally conductive, and differ in their physical structure, being cylinders composed of stacked layers rather than the hollow tubes of CNTs; 24,27 CNFs have the advantage of being less expensive than CNTs while still possessing beneficial thermal properties for FP. Printing parameters of the resin were investigated, along with the rheological behavior and mechanical and microscopic analysis of the generated composites. Finally, we demonstrated the ability to print free-standing structures with this filled resin.
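Because the introduction attributes front propagation to heat diffusion coupled with Arrhenius rate kinetics, a minimal numerical illustration of that temperature dependence is given below; the pre-exponential factor and activation energy are arbitrary assumed values, not kinetic parameters of the resins studied here.

```python
# Minimal sketch: Arrhenius temperature dependence, k = A * exp(-Ea / (R * T)).
# A and Ea are arbitrary assumed values, not fitted parameters for this resin;
# the point is only that a hot front accelerates the reaction by orders of
# magnitude relative to the cold resin ahead of it.
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius_k(A: float, Ea: float, T: float) -> float:
    return A * math.exp(-Ea / (R * T))

A, Ea = 1.0e7, 80.0e3  # assumed pre-exponential (s^-1) and activation energy (J mol^-1)
for T in (298.0, 373.0, 473.0):  # ~room temperature, 100 C, 200 C
    print(f"T = {T:5.1f} K -> k = {arrhenius_k(A, Ea, T):.2e} s^-1")
```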
Materials and Methods Materials 2,2-Bis(4-glycidyloxyphenyl)propane (bisphenol A diglycidyl ether, BADGE) was obtained from TCI Chemicals (Montgomeryville, PA). 3,4-Epoxycyclohexylmethyl-3,4-epoxycyclohexanecarboxylate (CE) was purchased from Ambeed Inc. (Arlington Heights, IL). Tri(ethylene glycol) divinyl ether (TEGDVE) and 1,1-bis(tert-butylperoxy)-3,3,5-trimethylcyclohexane (Luperox 231) were purchased from Sigma Aldrich (St. Louis, MO), and p-(octyloxyphenyl)(phenyl)iodonium hexafluoroantimonate (IOC-8) was purchased from Hampford Research Inc. (Stratford, CT). Aerosil 200 FS was obtained from Evonik (Piscataway, NJ). Zoltek PX35 milled carbon fibers (MCFs) were purchased from Zoltek Companies, Inc. (St. Louis, MO), and PR-19-XT-HHT carbon nanofibers (CNFs) were purchased from Pyrograf Inc. (Cedarville, OH). All chemicals were used as received, and their structures are shown below in Figure 1. Preparation of Formulations and Front Velocity Measurements The resin consisted of a mixture of 60 wt % BADGE, 20 wt % CE, and 20 wt % TEGDVE. To prepare formulations for frontal polymerization, first, IOC-8 was dissolved in a mixture of BADGE and CE using a heated sonicator at approximately 40 °C. After dissolution, TEGDVE and Luperox 231 were added, and the mixture was stirred for 10 min using a high shear mixer with a propeller. The IOC-8 acts as a superacid generator, while Luperox 231 acts as a thermal radical initiator that produces radicals that induce the decomposition of the IOC-8. For the fillers, FS was added and then the formulation was mixed in a FlackTek speed mixer at 800 rpm for 2 min. Then, either MCFs or CNFs were added, and the formulation was mixed once at 900 rpm for 1 min, followed by two cycles at 1800 rpm for 3 min. For front kinetic measurements, the formulation was loaded into a wooden mold (135 × 20 × 6 mm) lined with wax paper. Polymerization was initiated by contact using a soldering iron heated to 200 °C. The front was observed with a video camera, and the front velocity was calculated from the slope of the front position versus time. Where indicated, the formulation was instead spread onto a piece of plywood covered in wax paper at a thickness of 1.5 mm using a drawdown bar. The thickness of 1.5 mm was selected because it was closer to the dimension of a printed filament. The degree of cure of frontally polymerized materials was also determined using a differential scanning calorimeter (DSC Q100, TA Instruments, New Castle, DE). A ramp rate procedure from 0 to 250 °C at 10 °C min–1 was used for both the uncured resin and cured polymer. The degree of cure was calculated as one minus the ratio of the area of the residual exothermic peak of the cured polymer to the area of the curing exotherm of the uncured resin. Rheological Characterization A parallel plate Discovery Hybrid Rheometer 20 (DHR-20, Waters TA Instruments) was employed to measure shear viscosity for resin formulations containing different FS and CNF weight fractions (0 to 6 wt %). Resin samples were tested at 25 °C with 25 mm diameter parallel plates in shear rate sweep mode from 0.1 to 100 s–1. These data were used to study the shear-thinning behavior of the resin and to establish which resin formulations would be suitable for free-standing 3D printing at room temperature. The main goal was to achieve a viscosity allowing consistent extrusion through the extruder nozzle while avoiding sagging of the extruded material at the nozzle tip.
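The front-velocity extraction described above (the slope of front position versus time) can be sketched as follows; the data points are hypothetical, and the simple least-squares fit is just one reasonable way to obtain the slope rather than necessarily the authors' exact procedure.

```python
# Minimal sketch: front velocity as the slope of a least-squares line through
# (time, front position) points read from the video. The readings below are
# hypothetical, chosen only to give a slope of roughly 5 cm/min.
def front_velocity_cm_per_s(times_s, positions_cm):
    n = len(times_s)
    mt = sum(times_s) / n
    mx = sum(positions_cm) / n
    num = sum((t - mt) * (x - mx) for t, x in zip(times_s, positions_cm))
    den = sum((t - mt) ** 2 for t in times_s)
    return num / den

t = [0, 10, 20, 30, 40]          # s
x = [0.0, 0.9, 1.8, 2.6, 3.5]    # cm
print(f"front velocity = {front_velocity_cm_per_s(t, x) * 60:.2f} cm/min")  # ~5.2
```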
Morphological Characterization After polymerization, as described in Section 2.2, samples were manually fractured, and the surfaces were observed by scanning electron microscopy (SEM) to assess the presence of voids and the dispersion state of the carbon nanofibers or milled carbon fibers. The SEM images were taken with a high-performance JSM-6610LV SEM instrument at an accelerating voltage of 15 kV. SEM imaging of milled carbon fiber composites was performed with the Thermo Scientific Helios G4 PFIB CXe at an accelerating voltage of 5 kV. Before SEM, the fractured samples were sputter-coated with gold (EMS550X sputter coater) at 25 mA and a vacuum of 1 × 10–1 mbar for 2 min. The void content of the polymers produced by frontal polymerization was estimated by gravimetric density measurements based on ASTM D2734. Sample volume was limited to less than 2 cm3, contrary to what the standard method specifies, and the theoretical density of the polymer could not be calculated as indicated in the standard method but was instead measured empirically. These limitations arise from the porous nature of the system. To serve as a density reference, a polymer sample with minimal voids was cured in an oven at 100 °C. The densities of the void-free reference and of the polymer samples made by frontal polymerization were determined by dividing the sample weight by the volume, which was found by measuring each side of the sample. Void content, V, was calculated using eq 1, derived from ASTM D2734, where ρr is the density of the reference with no voids and ρs is the density of the sample. Samples were cut to size using a small table saw. Extrusion-Based Additive Manufacturing Two extrusion-based AM setups were considered for this study. A robotic setup with a UR5 manipulator was first employed to assess the feasibility of layer-by-layer 3D printing with the FP resin system and identify the main issues. Cylindrical geometries were printed based on a computer-aided design (CAD) model with an outer diameter of 20 mm, a height of 12 mm, and a wall width of 1.5 mm. The setup and manufacturing process were described in detail elsewhere. 28,29 To further investigate the behavior of the FP resin for small-scale, free-standing extrusion-based AM, a desktop 3D printer was then modified and used as the main AM setup in this study (Figure 2). This was achieved by using an Ender-3 V2 3D printer and replacing its heated fused deposition extruding unit with a custom-made syringe holder. The latter was 3D-printed with polylactic acid (PLA) and designed to hold a 10 mL syringe as an extruder. To obtain material flow, a tubing system was employed, connecting the syringe to a pressure controller. To regulate the pressure, a syringe dispenser (LOCTITE digital syringe dispenser, Henkel, Rocky Hill, CT) capable of adjusting the pressure within a range of 0 to 0.7 MPa (0 to 7 bar) was used. A nozzle with an inner diameter of 2 mm, unless specified otherwise, was attached at the tip of the syringe. The printing platform maintained an average temperature of 25 °C. The process involved depositing the FP resin onto the platform, followed by rapid initiation of the front. Two initiation methods were compared: (1) a soldering iron heated to 200 °C and (2) two SkyBeam UV spotlights at 100% intensity (10 W, 365 nm wavelength, 6 mm lens, 5.6 W cm–2 at a distance of 13 mm, UVitron International, West Springfield, MA).
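Since eq 1 is referenced but not written out above, the sketch below assumes the standard relation implied by the two densities defined there, V = 100·(ρr − ρs)/ρr, and uses hypothetical density values; treat both the exact form and the numbers as assumptions for illustration.

```python
# Minimal sketch of a void-content estimate in the spirit of ASTM D2734, using
# only the two densities defined in the text: rho_r (void-free reference) and
# rho_s (frontally polymerized sample). The exact form
# V = 100 * (rho_r - rho_s) / rho_r is an assumption here, and the density
# values below are hypothetical.
def void_content_percent(rho_reference: float, rho_sample: float) -> float:
    return 100.0 * (rho_reference - rho_sample) / rho_reference

# Hypothetical example: a 1.15 g/cm^3 oven-cured reference versus a
# 0.68 g/cm^3 frontally cured sample gives roughly 41 % voids.
print(f"{void_content_percent(1.15, 0.68):.1f} %")
```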
The modified 3D printing setup was used to study the extrusion behavior of the FP resin for different filler weight fractions (FS and CNFs) under different pressures (0.02 to 0.15 MPa), nozzle diameters (1.5 and 2.0 mm), and printing speeds (1.5 to 6 cm min –1 or 0.25 to 1.0 mm s –1 ). Videos of the extrusion at the nozzle were captured, and deposited filament width and thickness were measured with a caliper to find a suitable set of parameters based on resin formulation. The main goal was to find filler weight fraction, pressure, diameter, and printing speed combinations to achieve consistent material extrusion while avoiding material sagging at the nozzle exit to enable free-standing printing. Once a formulation was selected, planar specimens were printed for mechanical characterization ( Section 2.6 ). Free-standing printing was then demonstrated with single filaments printed at an angle (40°) and with helical geometries. Sample geometries were modeled in SolidWorks, then imported in UltiMaker Cura 5.4 (Netherlands) as .stl files and saved as .gcode files for the printing process. For free-standing printing, gcode files were manually modified to produce single paths. Mechanical Performance Characterization Tensile tests were performed with a 50 kN test machine (TestResources 313) on molded specimens for different FP resin formulations to assess the effect of the filler content (FS and carbon nanofibers). Dogbone specimens were molded with a 3-part acrylic mold based on ASTM D638 Type I geometry, as shown in Figure 3 a. It was coated with a release agent; then the resin was poured into the dogbone mold and pressed with a top acrylic plate, and the reaction was started with a soldering iron at a temperature of 200 °C at one end of the sample. The specimens were lightly sanded before testing to remove sharp edges and surface defects. For tensile testing, the specimens were clamped with hydraulic grips, and an extensometer (E3442, 50.8 mm gage, Epsilon Technology Corp., Jackson, WY) was positioned on each sample to acquire displacement data under a loading rate of 1.3 mm min –1 ( Figure 3 c). Each experiment was carried out on six to eight molded specimens ( n = 6 to 8) for each resin formulation. Ultimate strength, elastic modulus, and strain at break were obtained from the stress–strain curves as well as their corresponding standard deviations. To remove any outliers, Chauvenet’s Criterion was used when analyzing all data. To compare mechanical performance between molded and 3D-printed specimens, a rectangular three-point bending (3PB) geometry was employed based on ASTM D790. It allowed 3D printing of specimens in the longitudinal and transverse directions (shown in Figure 3 d) to evaluate the effect of filament orientation on mechanical performance under bending. Rectangular specimens were molded with a three-part acrylic mold as shown in Figure 3 b. It was coated with a release agent, then the resin was poured into the mold, pressed with a top acrylic plate, and the reaction was started with a soldering iron at a temperature of 200 °C at one end of the sample. Rectangular specimens had a base length of 65 mm, a width of 12.5 mm, and a thickness of 5 mm. Both molded and 3D-printed specimens were lightly sanded before testing to remove sharp edges or surface defects. A TestResources 50 kN test machine, equipped with a 3PB fixture, was employed for flexural loading at a rate of 1.3 mm min –1 until failure. 
The supports were placed symmetrically beneath the rectangular specimens with a span of 47 mm. During testing, load-displacement curves were acquired, and the flexural strain (ε) and stress (σ) were calculated with eqs 2 and 3, respectively, ε = 6Dd/L² and σ = 3PL/(2bd²), where D is the cross-head displacement (mm), d is the specimen’s thickness (mm), L is the span length (mm), P is the applied load (N), and b is the specimen’s width (mm). Each experiment was carried out on six to eight specimens (n = 6 to 8) for each molded and printed geometry. To remove any outliers, Chauvenet’s Criterion was used when analyzing all data.
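To make the conversion above concrete, the sketch below evaluates the standard ASTM D790 relations for a single load-displacement point; the span, width, and thickness match the values given in the text, while the load and displacement are hypothetical.

```python
# Minimal sketch: three-point-bending flexural stress and strain via the
# standard ASTM D790 relations, sigma = 3PL/(2*b*d^2) and epsilon = 6*D*d/L^2.
# Span, width, and thickness are taken from the text; the load P and
# displacement D are hypothetical values for illustration.
def flexural_stress_MPa(P_N, L_mm, b_mm, d_mm):
    return 3.0 * P_N * L_mm / (2.0 * b_mm * d_mm ** 2)   # N/mm^2 = MPa

def flexural_strain(D_mm, L_mm, d_mm):
    return 6.0 * D_mm * d_mm / L_mm ** 2

L, b, d = 47.0, 12.5, 5.0    # span, width, thickness (mm)
P, D = 150.0, 2.0            # hypothetical load (N) and cross-head displacement (mm)
print(f"sigma = {flexural_stress_MPa(P, L, b, d):.1f} MPa, "
      f"epsilon = {100 * flexural_strain(D, L, d):.2f} %")
```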
Results and Discussion Comparison of Composite Formulations Front Velocity The resin formulation was chosen as it resulted in rigid BADGE-based polymers, with dilution of the viscous BADGE to a workable viscosity by CE and TEGDVE, as they are both reactive monomers that increase front velocity compared to a formulation containing only BADGE. 14 , 30 Since the printing process is extrusion-based and extrusion speed may be limited by how fast the front can propagate, especially for free-standing printing, maximizing the front velocity while maintaining desirable physical properties is optimal. It was first found that the 3:1 BADGE:TEGDVE (wt:wt) with either 1 phr IOC-8 or 2 phr IOC-8 and 1 phr Luperox 231 formulations would support fronts in layers as thick as the wooden mold described in Section 2.2 , and produce rigid and strong polymers. However, at thinner diameters closer to the printing diameter, the front would be quenched. Increasing the TEGDVE to a 1:1 BADGE:TEGDVE (wt:wt) ratio with 2 phr IOC-8 and 1 phr Luperox 231 solved the reactivity issue, but the rigidity of the polymer appeared to decrease. This is likely due to the differences in structure of the TEGDVE versus BADGE, where the former is a structurally linear monomer with ether linkages that facilitate bending compared to the aromatic rings of BADGE, which provide rigidity. Polymers containing TEGDVE with epoxy and produced with RICFP have been previously reported to be flexible. 14 , 17 To maintain reactivity but improve the rigidity of the polymer, CE was added, which contains two cyclohexane rings for increased rigidity unlike TEGDVE. The final resin composition of 3:1:1 BADGE:CE:TEGDVE (wt:wt:wt) with 1 phr IOC-8 and 1 phr Luperox 231 was chosen as the optimal balance of reactivity and physical properties while supporting a front at the printing diameter to allow for the front-driven printing. In the literature, systems of pure BADGE, 1 mol % IOC-8, and 1 mol % benzopinacol have been found to have a front velocity of 2.7 cm min –1 . 16 Notably, these systems are also found to not support fronts below 1 mol % IOC-8. The front velocities we report here by adding TEGDVE and CE to BADGE are higher than these literature results and support fronts at lower concentrations of IOC-8; only 0.44 mol % IOC-8 (equivalent to 1 phr IOC-8) was needed to support a front with the 3:1:1 BADGE:CE:TEGDVE system. When comparing systems containing reactive diluents like the TEGDVE and CE, front velocities are comparable to the literature where the velocity ranges from 4.6 to 4.8 cm min –1 with CE and 1,4-butanediol diglycidyl ether added to BADGE with an IOC-8 and benzopinacol initiating system. 30 The lower minimum IOC-8 concentration is advantageous for lessened material requirements, and the greater velocity of the resin with TEGDVE and CE added is beneficial to the print speed. Carbon-based fillers were assessed as a means of increasing viscosity so that extrusion could continue without sagging of the material at the nozzle tip while also affecting the front kinetics and allowing for a faster printing process. An increasing amount of both FS and CNFs from 2 to 4 wt % added to the resin was found to result in an increase in front velocity, as shown in Figure 4 . With milled carbon fibers, however, the front velocity only increased a small amount at the highest loading studied of 7 wt % FS and 4 wt % MCF. 
The nonequivalent loadings of FS and carbon fiber are a result of qualitatively matching the viscosity of the FS and carbon nanofiber resin. The increasing front velocity was not unexpected based on previous publications regarding the addition of conductive elements to frontally polymerized resins, 7,17,31,32 as the addition of conductive fillers, such as carbon nanofibers or carbon fibers, aids in heat diffusion. Testing the resin with only FS and no carbon fillers resulted in a lower front velocity of 4.0 cm min–1, indicating that the added carbon filler aids heat diffusion and thereby increases the front velocity. The degree of cure for 4 wt % FS and 4 wt % CNF or 7 wt % FS and 4 wt % MCF samples was also assessed by differential scanning calorimetry, where a high average degree of cure of 98.5% was found for the CNF sample. For the MCF sample, a degree of cure of 99.2% was found. A representative curve from each sample is shown in Figures S2 and S4 in the Supporting Information. Detailed curves of the cured CNF and MCF samples are presented in Figures S3 and S5, respectively. Several factors may explain the much smaller increase in front velocity with milled carbon fibers compared with carbon nanofibers. First, the higher specific surface area of carbon nanofibers could impact the front velocity more, due to interactions with the system. A higher loading of either conductive filler is likely to increase front velocity up to a maximum loading, beyond which the front velocity suffers due to heat loss to the excess filler. Previous additions of conductive fillers to FP resins in the literature used approximately 30 wt % milled carbon fiber to achieve an increase in front velocity for RICFP systems, 17 or 49 wt % milled carbon fiber for free-radical acrylate FP. 7 Both of these previous reports used a much higher mass of carbon filler than the printing formulations shown here. Studies of other carbon fiber composites using plies of woven carbon fibers have also observed an increase in front velocity due to thermal conductivity. 33,34 The thermal conductivity of milled carbon fibers (6.4 W m–1 K–1) 7 is lower than that of carbon nanofibers (1950 W m–1 K–1). 24 It is unlikely that the FS addition affects the front velocity, as previous reports show that front kinetics are unaffected once the viscosity exceeds the critical minimum needed to suppress convective effects. 17,35,36 It was also found that the front velocity increased with filler loading in a 1.5 mm thick layer, as shown in Figure 4b,d. The wooden mold used in the previous experiments has a thickness of 6 mm, which is much thicker than the actual diameter of the resin when it is extruded from the printer. Thin layers in frontal polymerization suffer from higher heat loss than thicker samples due to the increased surface area to volume ratio, which can typically quench or slow fronts. 1,2,35 Surprisingly, the front velocity was slightly higher in the 1.5 mm thick layers than in the 6 mm thick layers as the filler content increased, especially for the most highly filled systems. The trend occurred for both milled carbon fibers and carbon nanofibers. This is contrary to previous reports in the literature, which indicate that the front velocity is lower in thinner layers because of the increased surface area to volume ratio and the resulting heat loss. 35,37,38 The cause of this anomalous result cannot be explained by previous reports and requires further investigation.
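As a side note on the initiator loading quoted earlier in this section (1 phr IOC-8 corresponding to about 0.44 mol %), the sketch below performs that phr-to-mol % conversion for the 3:1:1 BADGE:CE:TEGDVE resin; the molecular weights are nominal values assumed here for illustration rather than figures stated in the paper.

```python
# Minimal sketch: converting 1 phr IOC-8 (1 g per 100 g of resin) into mol %
# of total monomer for the 3:1:1 BADGE:CE:TEGDVE (wt:wt:wt) resin.
# The molecular weights below are nominal assumed values (g/mol).
MW = {"BADGE": 340.4, "CE": 252.3, "TEGDVE": 202.3, "IOC-8": 645.1}

resin_g = {"BADGE": 60.0, "CE": 20.0, "TEGDVE": 20.0}          # 100 g of resin
monomer_mol = sum(mass / MW[name] for name, mass in resin_g.items())
ioc8_mol = 1.0 / MW["IOC-8"]                                   # 1 phr

print(f"{100.0 * ioc8_mol / monomer_mol:.2f} mol %")           # ~0.44 mol %
```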
Rheological Behavior A study of the rheological properties of the printing resin containing different loadings of FS and carbon nanofibers was also performed. The viscosity profiles presented in Figure 5 show the expected increase in viscosity with increasing FS and carbon nanofiber content. The viscosity decreased with increasing shear rate, indicating that the filled resin possesses shear-thinning behavior, which is beneficial to extrusion-based 3D printing. Similar FS-filled epoxy resins intended for frontal polymerization were found to exhibit shear-thinning behavior, with viscosity decreasing as shear strain or shear rate increased. 17 The unfilled printing resin did not exhibit the same behavior; instead, its viscosity remained relatively constant as the shear rate increased. Outcomes from previous work on extrusion-based AM of thermosets suggested that the viscosity range obtained for resin formulations containing at least 2 wt % FS and 2 wt % CNF could be high enough to maintain filament dimensional stability after deposition. 29,39 Effect of Initiator Concentrations A study of the effects of the IOC-8 and Luperox 231 concentrations in the printing resin was conducted. It is well documented that increasing the superacid generator and radical initiator concentrations in RICFP systems increases the front velocity. 38,40,41 It has also been shown that RICFP resins likely depend more strongly on the IOC-8 concentration than on the Luperox 231 concentration. 14 Figure S6 shows that the front velocity increases as either the IOC-8 or the Luperox 231 concentration is increased from 1 to 5 phr for the printing resin consisting of 60 wt % BADGE, 20 wt % CE, and 20 wt % TEGDVE. As in the filler-content studies, the carbon nanofiber formulations have higher front velocities. The greater dependence on the IOC-8 concentration observed previously is not clearly seen here. For the milled carbon fiber system, the front velocity is higher at 5 phr IOC-8 with 1 or 3 phr Luperox 231 than at the opposite concentrations, 1 or 3 phr IOC-8 with 5 phr Luperox 231. For the carbon nanofiber system, the front velocity with 3 or 5 phr IOC-8 and 1 phr Luperox 231 is higher than with 3 or 5 phr Luperox 231 and 1 phr IOC-8. However, the same trend does not hold for the comparison of 5 phr IOC-8 and 3 phr Luperox 231 versus 5 phr Luperox 231 and 3 phr IOC-8, though those values appear to lie within the calculated error. Overall, there is an indication of a slightly greater dependence on the IOC-8 concentration, though it is not as clear as in previous reports. Extrusion-Based Additive Manufacturing To enable free-standing printing, the resin must be viscous enough to limit sagging as it is extruded, and the nozzle speed must be coordinated with the front speed to obtain solidification close to the tip while avoiding clogging. For the extrusion and printing studies, resin formulations containing between 2 and 4 wt % FS and CNF were investigated because they possessed suitable viscosity. Viscosity is a critical parameter in 3D printing, influencing the flow behavior of the resin during extrusion and the layering process. As observed in Figure 5, adding fillers to the resin affected its rheological properties.
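As a sketch of how shear-thinning data of this kind can be described and extrapolated to printing shear rates, the example below fits a two-point power law, η = K·γ̇^(n−1). The viscosity values used (≈288 Pa·s at 1 s⁻¹ and ≈5.2 Pa·s at 100 s⁻¹ for the 4 wt % formulation) are those quoted later in this section; the two-point fit itself is only an illustration and not necessarily the authors' exact fitting procedure.

```python
# Minimal sketch: two-point fit of an Ostwald-de Waele power law,
# eta = K * gamma_dot**(n - 1), followed by extrapolation to low shear rates.
# The two viscosity points are the approximate values quoted in the text for
# the 4 wt % formulation; a fit to the full data set (as in the paper) would
# give slightly different numbers (~1790 and ~540 Pa.s).
import math

def power_law_two_points(g1, eta1, g2, eta2):
    n_minus_1 = math.log(eta2 / eta1) / math.log(g2 / g1)
    return eta1 / g1 ** n_minus_1, n_minus_1   # K, n - 1

K, n_minus_1 = power_law_two_points(1.0, 288.0, 100.0, 5.2)
for gamma_dot in (0.13, 0.5):                  # estimated nozzle shear rates, s^-1
    print(f"gamma_dot = {gamma_dot:4.2f} s^-1 -> eta ~ {K * gamma_dot ** n_minus_1:5.0f} Pa.s")
```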
During initial extrusion-based AM trials, it was experimentally observed that beyond a certain concentration (>4 wt % FS and 4 wt % CNF), the high resin viscosity made it challenging to extrude and print the material with consistent flow (above approximately 288 to 5.2 Pa·s viscosity, from 1 to 100 s –1 shear rate). This resulted in issues such as nozzle clogging, uneven layer deposition, and poor print quality. Therefore, a range of filler weight fractions from 2 to 4 wt % was chosen for the parametric studies. Carbon nanofibers were selected over milled carbon fibers for 3D printing because they increased front velocity more, which is preferable to lower the overall manufacturing time. In addition, less FS was needed to reach a suitable viscosity than formulations with MCF. Parametric Study on Composite Formulations Figure 6 shows a summary of preliminary extrusion experiments to find suitable pressure and printing speed ranges for extrusion of resins containing FS and CNF. Figure 6 a compares extruded filaments under pressures ranging from 0.02 to 0.15 MPa for two nozzle diameters and two resin formulations. The filament exhibited smoother and more consistent behavior as the pressure and nozzle diameter increased. While the extrusion was consistent for the lowest FS and CNF weight fractions (2 wt %), it was observed that the viscosity was too low to ensure the filament would retain its shape after extrusion and deposition (a range from approximately 1.7 to 63 Pa·s, depending on shear rate, as indicated in Figure 5 ). This was noted for 3 wt % formulations as well, confirming that 4 wt % would be the most suitable for free-standing 3D printing. From the rheological measurements described in Section 3.1.2 , the 4 wt % formulation corresponds to a viscosity range from 5.2 to 288 Pa·s at a shear rate of 100 to 1 s –1 . Assuming pipe flow in the nozzle and a flow rate consistent with the printing speed (1.5 to 6 cm min –1 ), the actual shear rate at the nozzle is estimated between 0.13 and 0.5 s –1 . 29 By fitting the data points presented in Figure 5 with a power law function, we can extrapolate the viscosity at the actual shear rates in the printing process between approximately 1790 and 540 Pa·s. The 4 wt % resin formulation was then used to extrude single filaments under 0.02 MPa pressure at different printing speeds (from 1.5 to 6 cm min –1 , a value close to the front velocity) to identify a suitable range for producing filaments with consistent width and thickness while matching the front velocity of the resin system. The results are presented in Figure 6 b, confirming that filament width and thickness decreased with a low standard deviation as the speed increased up to 6 cm min –1 . This indicates that using the 4 wt % resin formulation, in combination with a nozzle diameter of 2 mm and a pressure of 0.02 MPa, could be suitable for free-standing printing as it would be possible to coordinate front and printing speeds. Using a higher pressure would require a higher printing speed to maintain consistent filament extrusion but could exceed the front velocity, leading to filament sagging and unsuccessful free-standing printing. Mechanical Characterization of Composite Specimens As discussed in Section 3.1.1 , CE was added to the TEGDVE-epoxy formulation to improve rigidity while maintaining reactivity and supporting the front propagation. The effect of FS and CNF weight fractions on the tensile properties of the resin system, for as-molded specimens, is shown in Figure 7 a. 
Representative stress–strain curves for the tensile tests are shown in Figure S7 . The average ultimate strength increased with filler weight fraction, indicating effective reinforcement of the specimens. Similarly, the elastic modulus showed a slight increase as the filler loading increased from 2 to 4 wt %. The strain at break confirmed the rigid, brittle behavior of the resin system at all weight fractions. However, the large standard deviations for strain at break values imply that the difference between weight fractions is not significant. The variations between specimens were likely caused by the porous nature of the resin system after frontal polymerization. Void formation in RICFP systems has been previously reported and is caused by the decomposition of the initiators that produce gas. 40 Representative SEM images of the MCF composites ( Figure 8 ) show details of the composite morphology, with the presence of several voids ( Figure 8 c,d). This morphology was observed for both milled carbon fiber and carbon nanofiber composites. SEM images in Figure 9 show a generally uniform CNF dispersion without large agglomerates. Voids were present on the surface of the specimens as well, potentially creating damage initiation sites. Overall, it is expected that variations in tensile properties mostly depend on the presence of voids. In the literature, tensile properties of RICFP-cured epoxies without and with different fillers (woven carbon fiber plies, 34 multiwalled carbon nanotubes (MWCNTs), 23 or continuous carbon fibers 22 ) were reported. Printed composite specimens containing 1 wt % MWCNT exhibited tensile strength in the same order of magnitude as formulations in Figure 7 a. 22 Reinforcing material and architecture (woven, continuous carbon fibers) significantly increased elastic modulus and tensile strength, 22 , 23 , 34 but generally, comparable or higher front velocities were obtained with our 4 wt % formulation (above 5 cm min –1 ). The influence of the manufacturing approach on the flexural behavior of composite specimens was assessed through 3PB. Figure 7 b compares the flexural strength of molded specimens and 3D-printed specimens in the longitudinal and transverse directions. It shows that molded specimens possessed the highest strength, followed by 3D-printed specimens in the longitudinal and then transverse directions. It is expected that the molded specimens displayed the highest strength because the fabrication process involved the compression of the specimen between two acrylic sheets. This created more even front propagation and surfaces compared to 3D-printed specimens, for which the bottom surface in contact with the printing bed exhibited a more porous morphology. The reduced strength of 3D-printed specimens may be further attributed to adhesion issues between adjacent filaments and possible defects introduced by voids, especially for those printed in the transverse direction. The lowest strength in the transverse direction could also be explained by preferential CNF alignment along the extruded filaments, leading to lower flexural properties compared to the longitudinal direction, as suggested in the literature for different frontal polymerization systems. 
20 The void content of the polymers produced by frontal polymerization was estimated using gravimetric density measurements based on ASTM D2734, with limitations of sample size and a slight modification of the provided equation to replace the theoretical composite density with a measured composite density of a minimal void sample. Comparing densities of oven-cured samples with minimal voids to the densities of samples produced by frontal polymerization with many voids, it was found that samples with 4 wt % CNF and 4 wt % FS had an average 41 ± 4.1% void content, while samples with 4 wt % MCF and 7 wt % FS had an average 58 ± 5.9% void content. Both results indicate that there is a high number of voids in the polymer, which can explain some of the variability in the measured mechanical performance. There was some difficulty producing a reference sample to obtain the density without voids present. Future investigations of the void content and methods to reduce it are recommended to use microcomputed tomography for porosity analysis. Voids could be reduced by further optimization of initiator concentration in these systems since the decomposition of the initiators is the biggest contributor to the issue. Another potential solution could be the introduction of fillers to act as nucleation sites for bubbles and generate a more uniform void content. Benzopinacol could also be used as a thermal radical initiator, which has been shown to produce no gas, 16 but can be difficult to dissolve in these epoxy-vinyl ether resins, requiring solvents that can negatively impact front propagation. 23 , 25 Luperox 231 does not have this issue of solubility as it is a liquid peroxide. Free-Standing Additive Manufacturing Demonstration Figure 10 shows different 3D printing geometries to assess the feasibility of printing approaches. Layer-by-layer cylindrical geometries were first 3D-printed with a robotic AM system ( Figure 10 a) to assess the main issues. Initially, coordination between the front velocity and extruder speed was attempted by selecting a printing speed of 6 cm min –1 . However, as layers were deposited on top of existing layers, which remained at high temperature, the front propagated through the syringe tip, clogging the extruder. To avoid this issue, a higher printing speed between 40 and 45 cm min –1 was used, which allowed successful fabrication of the planar specimen. For free-standing printing, the main challenge was to achieve solidification of the filament as it is extruded by maintaining an adequate distance between the nozzle and front. A print speed that closely matches the front velocity is optimal for this process. Several trials were performed to study the effect of reaction initiation with a soldering iron and the printing speed. Single filaments were extruded at a 40° angle at printing speeds between 5.0 and 5.5 cm min –1 , as shown in Figure 10 e. Figure 10 e-i,ii compares different reaction initiation delays at the same printing speed. A longer reaction initiation delay, where the distance between the nozzle and the front was above 10 mm, led to a specimen geometry and angle significantly deviating from the planned path at 40°. A printing speed of 5.2 cm min –1 ( Figure 10 e-iii,iv) with a reaction initiation delay between approximately 5 and 7 mm allowed free-standing printing of filaments with angles between 38° ± 2° and 40° ± 2°. However, as the reaction initiation method required contact with the filament upon extrusion, the base of the filaments was inconsistent. 
Finally, a higher printing speed of 5.5 cm min –1 , along with a reaction initiation delay lower than 5 mm, showed a curved filament shape. This indicates the reaction was initially well-coordinated with the nozzle, but the distance between the nozzle and front increased over time due to the printing speed. From those initial trials, a printing speed of 5.2 cm min –1 was selected, and the reaction initiation method was improved by using two UV spotlights to eliminate physical contact. Helical geometries were then manufactured to demonstrate the feasibility of free-standing 3D printing with the proposed resin system ( Figure 10 b–d). Future work should address void formation during frontal polymerization to create structures with higher dimensional stability. Depending on the printed geometry, print speed must be adjusted to prevent front propagation between layers, which would lead to failure of the print. This is shown by the significant differences between the print speeds of free-standing helical ( Figure 10 b–d) and layered cylindrical ( Figure 10 a) structures.
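The coordination requirement discussed above (printing speed closely matching the front velocity) amounts to simple kinematics: at constant speeds, the nozzle-to-front distance changes linearly with their difference. The sketch below illustrates this with the reported speeds; the constant-velocity assumption and the initial gap value are idealizations.

```python
# Minimal sketch: nozzle-to-front gap when printing at constant speed while the
# front advances at constant velocity, d(t) = d0 + (v_print - v_front) * t.
# Constant speeds are an idealization and the initial 5 mm gap is hypothetical.
def gap_mm(d0_mm, v_print_cm_min, v_front_cm_min, t_s):
    dv_mm_per_s = (v_print_cm_min - v_front_cm_min) * 10.0 / 60.0
    return d0_mm + dv_mm_per_s * t_s

# Printing at 5.5 cm/min against a front moving at ~5.2 cm/min: the gap grows.
for t in (0, 30, 60, 120):
    print(f"t = {t:3d} s -> gap ~ {gap_mm(5.0, 5.5, 5.2, t):.1f} mm")
```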
Results and Discussion Comparison of Composite Formulations Front Velocity The resin formulation was chosen as it resulted in rigid BADGE-based polymers, with dilution of the viscous BADGE to a workable viscosity by CE and TEGDVE, as they are both reactive monomers that increase front velocity compared to a formulation containing only BADGE. 14 , 30 Since the printing process is extrusion-based and extrusion speed may be limited by how fast the front can propagate, especially for free-standing printing, maximizing the front velocity while maintaining desirable physical properties is optimal. It was first found that the 3:1 BADGE:TEGDVE (wt:wt) with either 1 phr IOC-8 or 2 phr IOC-8 and 1 phr Luperox 231 formulations would support fronts in layers as thick as the wooden mold described in Section 2.2 , and produce rigid and strong polymers. However, at thinner diameters closer to the printing diameter, the front would be quenched. Increasing the TEGDVE to a 1:1 BADGE:TEGDVE (wt:wt) ratio with 2 phr IOC-8 and 1 phr Luperox 231 solved the reactivity issue, but the rigidity of the polymer appeared to decrease. This is likely due to the differences in structure of the TEGDVE versus BADGE, where the former is a structurally linear monomer with ether linkages that facilitate bending compared to the aromatic rings of BADGE, which provide rigidity. Polymers containing TEGDVE with epoxy and produced with RICFP have been previously reported to be flexible. 14 , 17 To maintain reactivity but improve the rigidity of the polymer, CE was added, which contains two cyclohexane rings for increased rigidity unlike TEGDVE. The final resin composition of 3:1:1 BADGE:CE:TEGDVE (wt:wt:wt) with 1 phr IOC-8 and 1 phr Luperox 231 was chosen as the optimal balance of reactivity and physical properties while supporting a front at the printing diameter to allow for the front-driven printing. In the literature, systems of pure BADGE, 1 mol % IOC-8, and 1 mol % benzopinacol have been found to have a front velocity of 2.7 cm min –1 . 16 Notably, these systems are also found to not support fronts below 1 mol % IOC-8. The front velocities we report here by adding TEGDVE and CE to BADGE are higher than these literature results and support fronts at lower concentrations of IOC-8; only 0.44 mol % IOC-8 (equivalent to 1 phr IOC-8) was needed to support a front with the 3:1:1 BADGE:CE:TEGDVE system. When comparing systems containing reactive diluents like the TEGDVE and CE, front velocities are comparable to the literature where the velocity ranges from 4.6 to 4.8 cm min –1 with CE and 1,4-butanediol diglycidyl ether added to BADGE with an IOC-8 and benzopinacol initiating system. 30 The lower minimum IOC-8 concentration is advantageous for lessened material requirements, and the greater velocity of the resin with TEGDVE and CE added is beneficial to the print speed. Carbon-based fillers were assessed as a means of increasing viscosity so that extrusion could continue without sagging of the material at the nozzle tip while also affecting the front kinetics and allowing for a faster printing process. An increasing amount of both FS and CNFs from 2 to 4 wt % added to the resin was found to result in an increase in front velocity, as shown in Figure 4 . With milled carbon fibers, however, the front velocity only increased a small amount at the highest loading studied of 7 wt % FS and 4 wt % MCF. 
The nonequivalent loadings of FS and carbon fiber are a result of qualitatively matching the viscosity of the FS and carbon nanofiber resin. The increasing front velocity was not unexpected based on previous publications regarding the addition of conductive elements to frontally polymerized resins. 7 , 17 , 31 , 32 However, the addition of conductive fillers, such as carbon nanofibers or carbon fibers, aids in heat diffusion. Testing the resin with only FS and no carbon fillers resulted in a lower front velocity of 4.0 cm min –1 , indicating that the addition of carbon filler is aiding in heat diffusion and subsequent increase of front velocity. The degree of cure for 4 wt % FS and 4 wt % CNF or 7 wt % FS and 4 wt % MCF samples was also assessed by differential scanning calorimetry, where a high average degree of cure of 98.5% was found for the CNF sample. For the MCF sample, a degree of cure of 99.2% was found. A representative curve from each sample is shown in Figures S2 and S4 in Supporting Information. Detailed curves of the cured CNF and MCF samples are presented in Figures S3 and S5 , respectively. For milled carbon fibers, there are possible factors that may cause the much smaller increase in front velocity compared with carbon nanofibers. First, the higher surface area of carbon nanofibers could impact the front velocity more, due to interactions with the system. A higher loading of either conductive element is likely to increase front velocity up to a maximum loading when the front velocity suffers due to heat loss to the excess filler. Previous additions of conductive fillers to FP resins in the literature used approximately 30 wt % milled carbon fiber to result in an increase in front velocity for RICFP systems, 17 or 49 wt % milled carbon fiber for free-radical acrylate FP. 7 Both of these previous literature reports use a much higher mass of carbon fillers than that of the printing formulations shown here. Studies of other carbon fiber composites have used plies of woven carbon fibers to witness an increase in front velocity due to thermal conductivity. 33 , 34 The thermal conductivity of milled carbon fibers is lower, 6.4 W m –1 K –1 , 7 than carbon nanofibers, 1950 W m –1 K –1 . 24 It is unlikely that the FS addition is affecting the front velocity as previous reports show front kinetics are unaffected above a critical minimum viscosity to overcome convective effects. 17 , 35 , 36 It was also found that front velocity would increase with filler loading at a 1.5 mm thick layer, as shown in Figure 4 b,d. The wooden mold used in the previous experiments has a thickness of 6 mm, which is much thicker than the actual diameter of the resin when it is extruded from the printer. Thin layers in frontal polymerization suffer from higher heat loss than thicker samples due to the increased surface area to volume ratio, which can typically quench or slow fronts. 1 , 2 , 35 Surprisingly, the front velocity was slightly higher at 1.5 mm thick layers than at the 6 mm thick layers as filler content increased, especially for the most filled systems. The trend occurred for both milled carbon fiber and carbon nanofibers. This is contrary to the previous reports in the literature, which would indicate that the front velocity is lower in thinner layers due to an increase in surface area to volume ratio that results in heat loss in the system. 35 , 37 , 38 The cause of this anomalous result cannot be explained by previous reports and requires further investigation. 
Rheological Behavior A study of the rheological properties of the printing resin containing different loadings of FS and carbon nanofiber was also performed. An expected increase in viscosity was seen with increasing FS and carbon nanofibers in the viscosity profiles presented in Figure 5 . The viscosity decreased with increasing shear rate, indicating that the filled resin possesses shear-thinning behavior, which is beneficial to extrusion-based 3D printing. Similar FS-filled epoxy resins meant for frontal polymerization were found to exhibit shear-thinning behavior, where viscosity decreased with an increase in shear strain or shear rate. 17 The unfilled printing resin did not appear to exhibit the same behavior. Instead, its viscosity remained relatively constant when the shear rate increased. Outcomes from previous work on extrusion-based AM of thermosets suggested that the viscosity range obtained for resin formulations containing at least 2 wt % FS and 2 wt % CNF could be high enough to maintain filament dimensional stability after deposition. 29 , 39 Effect of Initiator Concentrations A study of the effects in IOC-8 and Luperox 231 concentrations in the printing resin was conducted. It is well documented that the increase of the superacid generator and radical initiator in RICFP systems results in an increase of front velocity. 38 , 40 , 41 It has also been shown that RICFP resins likely have a greater dependence on the IOC-8 concentration than Luperox 231 concentration. 14 In Figure S6 , it is seen that the front velocity increases with either IOC-8 or Luperox 231 concentration increases from 1 to 5 phr for the printing resin consisting of 60 wt % BADGE, 20 wt % CE, and 20 wt % TEGDVE. Like the studies of increasing filler content, carbon nanofiber formulations have higher front velocities. The greater dependency of the system on the IOC-8 concentration previously observed is not clearly seen here. Looking at the milled carbon fiber system, there is a higher front velocity at 5 phr IOC-8 and 1 or 3 phr Luperox 231 than at the opposite concentrations, 1 or 3 phr IOC-8 and 5 phr Luperox 231. For the carbon nanofiber system, front velocity with 3 or 5 phr IOC-8 and 1 phr Luperox 231 is higher than 3 or 5 phr Luperox 231 and 1 phr IOC-8. However, the same trend does not continue for a comparison of 5 phr IOC-8 and 3 phr Luperox 231 versus 5 phr Luperox 231 and 3 phr IOC-8; though, the values do appear to lie within the calculated error. Overall, there is an indication of a slightly greater dependence on the IOC-8 concentration, though not as clear as previous reports. Extrusion-Based Additive Manufacturing To enable free-standing printing, the resin must be viscous enough to limit sagging as it is extruded, and the nozzle speed must be coordinated with the front speed to obtain solidification close to the tip while avoiding clogging. For the extrusion and printing studies, resin formulations containing between 2 and 4 wt % FS and CNF were investigated because they possessed suitable viscosity. Viscosity is a critical parameter in 3D printing, influencing the flow behavior of the resin during extrusion and the layering process. As observed in Figure 5 , adding fillers to the resin affected its rheological properties. 
During initial extrusion-based AM trials, it was experimentally observed that beyond a certain concentration (>4 wt % FS and 4 wt % CNF), the high resin viscosity made it challenging to extrude and print the material with consistent flow (above approximately 288 to 5.2 Pa·s viscosity, from 1 to 100 s –1 shear rate). This resulted in issues such as nozzle clogging, uneven layer deposition, and poor print quality. Therefore, a range of filler weight fractions from 2 to 4 wt % was chosen for the parametric studies. Carbon nanofibers were selected over milled carbon fibers for 3D printing because they increased front velocity more, which is preferable to lower the overall manufacturing time. In addition, less FS was needed to reach a suitable viscosity than formulations with MCF. Parametric Study on Composite Formulations Figure 6 shows a summary of preliminary extrusion experiments to find suitable pressure and printing speed ranges for extrusion of resins containing FS and CNF. Figure 6 a compares extruded filaments under pressures ranging from 0.02 to 0.15 MPa for two nozzle diameters and two resin formulations. The filament exhibited smoother and more consistent behavior as the pressure and nozzle diameter increased. While the extrusion was consistent for the lowest FS and CNF weight fractions (2 wt %), it was observed that the viscosity was too low to ensure the filament would retain its shape after extrusion and deposition (a range from approximately 1.7 to 63 Pa·s, depending on shear rate, as indicated in Figure 5 ). This was noted for 3 wt % formulations as well, confirming that 4 wt % would be the most suitable for free-standing 3D printing. From the rheological measurements described in Section 3.1.2 , the 4 wt % formulation corresponds to a viscosity range from 5.2 to 288 Pa·s at a shear rate of 100 to 1 s –1 . Assuming pipe flow in the nozzle and a flow rate consistent with the printing speed (1.5 to 6 cm min –1 ), the actual shear rate at the nozzle is estimated between 0.13 and 0.5 s –1 . 29 By fitting the data points presented in Figure 5 with a power law function, we can extrapolate the viscosity at the actual shear rates in the printing process between approximately 1790 and 540 Pa·s. The 4 wt % resin formulation was then used to extrude single filaments under 0.02 MPa pressure at different printing speeds (from 1.5 to 6 cm min –1 , a value close to the front velocity) to identify a suitable range for producing filaments with consistent width and thickness while matching the front velocity of the resin system. The results are presented in Figure 6 b, confirming that filament width and thickness decreased with a low standard deviation as the speed increased up to 6 cm min –1 . This indicates that using the 4 wt % resin formulation, in combination with a nozzle diameter of 2 mm and a pressure of 0.02 MPa, could be suitable for free-standing printing as it would be possible to coordinate front and printing speeds. Using a higher pressure would require a higher printing speed to maintain consistent filament extrusion but could exceed the front velocity, leading to filament sagging and unsuccessful free-standing printing. Mechanical Characterization of Composite Specimens As discussed in Section 3.1.1 , CE was added to the TEGDVE-epoxy formulation to improve rigidity while maintaining reactivity and supporting the front propagation. The effect of FS and CNF weight fractions on the tensile properties of the resin system, for as-molded specimens, is shown in Figure 7 a. 
Representative stress–strain curves for the tensile tests are shown in Figure S7 . The average ultimate strength increased with filler weight fraction, indicating effective reinforcement of the specimens. Similarly, the elastic modulus showed a slight increase as the filler loading increased from 2 to 4 wt %. The strain at break confirmed the rigid, brittle behavior of the resin system at all weight fractions. However, the large standard deviations for strain at break values imply that the difference between weight fractions is not significant. The variations between specimens were likely caused by the porous nature of the resin system after frontal polymerization. Void formation in RICFP systems has been previously reported and is caused by the decomposition of the initiators that produce gas. 40 Representative SEM images of the MCF composites ( Figure 8 ) show details of the composite morphology, with the presence of several voids ( Figure 8 c,d). This morphology was observed for both milled carbon fiber and carbon nanofiber composites. SEM images in Figure 9 show a generally uniform CNF dispersion without large agglomerates. Voids were present on the surface of the specimens as well, potentially creating damage initiation sites. Overall, it is expected that variations in tensile properties mostly depend on the presence of voids. In the literature, tensile properties of RICFP-cured epoxies without and with different fillers (woven carbon fiber plies, 34 multiwalled carbon nanotubes (MWCNTs), 23 or continuous carbon fibers 22 ) were reported. Printed composite specimens containing 1 wt % MWCNT exhibited tensile strength in the same order of magnitude as formulations in Figure 7 a. 22 Reinforcing material and architecture (woven, continuous carbon fibers) significantly increased elastic modulus and tensile strength, 22 , 23 , 34 but generally, comparable or higher front velocities were obtained with our 4 wt % formulation (above 5 cm min –1 ). The influence of the manufacturing approach on the flexural behavior of composite specimens was assessed through 3PB. Figure 7 b compares the flexural strength of molded specimens and 3D-printed specimens in the longitudinal and transverse directions. It shows that molded specimens possessed the highest strength, followed by 3D-printed specimens in the longitudinal and then transverse directions. It is expected that the molded specimens displayed the highest strength because the fabrication process involved the compression of the specimen between two acrylic sheets. This created more even front propagation and surfaces compared to 3D-printed specimens, for which the bottom surface in contact with the printing bed exhibited a more porous morphology. The reduced strength of 3D-printed specimens may be further attributed to adhesion issues between adjacent filaments and possible defects introduced by voids, especially for those printed in the transverse direction. The lowest strength in the transverse direction could also be explained by preferential CNF alignment along the extruded filaments, leading to lower flexural properties compared to the longitudinal direction, as suggested in the literature for different frontal polymerization systems. 
20 The void content of the polymers produced by frontal polymerization was estimated using gravimetric density measurements based on ASTM D2734, with limitations of sample size and a slight modification of the provided equation to replace the theoretical composite density with a measured composite density of a minimal void sample. Comparing densities of oven-cured samples with minimal voids to the densities of samples produced by frontal polymerization with many voids, it was found that samples with 4 wt % CNF and 4 wt % FS had an average 41 ± 4.1% void content, while samples with 4 wt % MCF and 7 wt % FS had an average 58 ± 5.9% void content. Both results indicate that there is a high number of voids in the polymer, which can explain some of the variability in the measured mechanical performance. There was some difficulty producing a reference sample to obtain the density without voids present. Future investigations of the void content and methods to reduce it are recommended to use microcomputed tomography for porosity analysis. Voids could be reduced by further optimization of initiator concentration in these systems since the decomposition of the initiators is the biggest contributor to the issue. Another potential solution could be the introduction of fillers to act as nucleation sites for bubbles and generate a more uniform void content. Benzopinacol could also be used as a thermal radical initiator, which has been shown to produce no gas, 16 but can be difficult to dissolve in these epoxy-vinyl ether resins, requiring solvents that can negatively impact front propagation. 23 , 25 Luperox 231 does not have this issue of solubility as it is a liquid peroxide. Free-Standing Additive Manufacturing Demonstration Figure 10 shows different 3D printing geometries to assess the feasibility of printing approaches. Layer-by-layer cylindrical geometries were first 3D-printed with a robotic AM system ( Figure 10 a) to assess the main issues. Initially, coordination between the front velocity and extruder speed was attempted by selecting a printing speed of 6 cm min –1 . However, as layers were deposited on top of existing layers, which remained at high temperature, the front propagated through the syringe tip, clogging the extruder. To avoid this issue, a higher printing speed between 40 and 45 cm min –1 was used, which allowed successful fabrication of the planar specimen. For free-standing printing, the main challenge was to achieve solidification of the filament as it is extruded by maintaining an adequate distance between the nozzle and front. A print speed that closely matches the front velocity is optimal for this process. Several trials were performed to study the effect of reaction initiation with a soldering iron and the printing speed. Single filaments were extruded at a 40° angle at printing speeds between 5.0 and 5.5 cm min –1 , as shown in Figure 10 e. Figure 10 e-i,ii compares different reaction initiation delays at the same printing speed. A longer reaction initiation delay, where the distance between the nozzle and the front was above 10 mm, led to a specimen geometry and angle significantly deviating from the planned path at 40°. A printing speed of 5.2 cm min –1 ( Figure 10 e-iii,iv) with a reaction initiation delay between approximately 5 and 7 mm allowed free-standing printing of filaments with angles between 38° ± 2° and 40° ± 2°. However, as the reaction initiation method required contact with the filament upon extrusion, the base of the filaments was inconsistent. 
Finally, a higher printing speed of 5.5 cm min –1 , along with a reaction initiation delay lower than 5 mm, showed a curved filament shape. This indicates the reaction was initially well-coordinated with the nozzle, but the distance between the nozzle and front increased over time due to the printing speed. From those initial trials, a printing speed of 5.2 cm min –1 was selected, and the reaction initiation method was improved by using two UV spotlights to eliminate physical contact. Helical geometries were then manufactured to demonstrate the feasibility of free-standing 3D printing with the proposed resin system ( Figure 10 b–d). Future work should address void formation during frontal polymerization to create structures with higher dimensional stability. Depending on the printed geometry, print speed must be adjusted to prevent front propagation between layers, which would lead to failure of the print. This is shown by the significant differences between the print speeds of free-standing helical ( Figure 10 b–d) and layered cylindrical ( Figure 10 a) structures.
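The coordination between printing speed and front velocity described above can be illustrated with a simple kinematic sketch. It assumes, purely for illustration, that the nozzle and the polymerization front advance at constant speeds along the same path, so the nozzle–front gap changes linearly in time; the numbers are hypothetical and only meant to show why a printing speed slightly above the front velocity eventually lets the front fall behind the nozzle.

```python
def nozzle_front_gap(initial_gap_mm, print_speed_cm_min, front_speed_cm_min, t_s):
    """Gap (mm) between the nozzle and the polymerization front after t_s seconds.

    Simplified model: both the nozzle and the front are assumed to move at
    constant speeds along the same path, so the gap changes linearly in time.
    """
    dv_mm_s = (print_speed_cm_min - front_speed_cm_min) * 10.0 / 60.0
    return initial_gap_mm + dv_mm_s * t_s

# Hypothetical scenario: front velocity 5.2 cm/min, 6 mm initial reaction delay
for v_print in (5.2, 5.5):
    gap = nozzle_front_gap(6.0, v_print, 5.2, t_s=120.0)
    print(f"print speed {v_print} cm/min -> gap after 2 min: {gap:.1f} mm")
```

With matched speeds the gap stays at its initial value, whereas a 0.3 cm min –1 mismatch opens the gap by several millimeters within a couple of minutes, consistent with the curved filaments observed at 5.5 cm min –1.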
Conclusions In this work, the optimization and application of a formulation that uses RICFP to 3D print free-standing structures was studied. During the process of optimization, it was found that increasing the ratio of tri(ethylene glycol) divinyl ether to BADGE increased the front velocity substantially. However, with this increasing ratio, the polymer became more flexible than the stiff polymer that was desired. To balance the reactivity and physical properties, a cycloaliphatic epoxy was added. The effect of the initiator concentration was studied and shown to have a slightly greater dependence on the IOC-8 concentration than the Luperox 231 concentration. The addition of filler can increase the viscosity and affect front kinetics, as seen with the addition of milled carbon fibers and carbon nanofibers, which are both thermally conductive and have high thermal diffusivity. Carbon nanofibers were found to affect the front velocity more than milled carbon fibers, which is likely due to the higher surface area of carbon nanofibers, differences in thermal conductivity of the fillers, or the lower loadings of the fillers compared to previous reports in the literature. Interestingly, for both fillers, the front velocity was higher in a 1.5 mm layer rather than a 6 mm layer. To demonstrate extrusion-based additive manufacturing, a desktop 3D printer was modified to control resin extrusion and deposition using a digital syringe dispenser. A parametric study compared the effect of air pressure, nozzle diameter, and filler weight fraction on extrusion behavior, revealing that a formulation containing 4 wt % FS and 4 wt % carbon nanofibers was the most suitable for free-standing printing. The main goal was to achieve a viscosity for consistent extrusion through the nozzle while avoiding sagging of the extruded material at the nozzle tip. An air pressure of 0.02 MPa allowed the extrusion of dimensionally stable filaments at a printing speed matching the front velocity (between 5 and 6 cm min –1 ). Flexural properties of molded and 3D-printed specimens were obtained through 3PB tests, showing that specimens printed in the transverse direction exhibited the lowest strength. This is likely due to the presence of voids within and between filaments, adhesion issues, and preferential carbon nanofiber alignment along the filaments. SEM confirmed the porous morphology, resulting from the decomposition of the initiators, producing gas during frontal polymerization. Determination of void content by density measurements showed a high average void content (41 ± 4.1% for CNF samples and 58 ± 5.9% for MCF samples). Finally, free-standing printing was successfully demonstrated with single, angled filaments and helical geometries. Future work should focus on reduction of void formation, which would improve the mechanical performance of 3D-printed specimens. In addition, a more detailed study is needed to investigate the causes of increased front velocities at thinner layers with carbon nanofibers and milled carbon fibers and the interactions of the carbon-based fillers with the resin system.
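As a companion to the void-content figures summarized above, the modified gravimetric estimate can be written compactly. The sketch below assumes only the ASTM D2734-style relation described earlier, in which the measured density of a minimal-void, oven-cured reference replaces the theoretical composite density; the density values are hypothetical placeholders, not measured data.

```python
def void_content_percent(rho_sample: float, rho_reference: float) -> float:
    """Void content (%) relative to a minimal-void reference density.

    Modified ASTM D2734-style estimate: the measured density of an oven-cured,
    minimal-void sample stands in for the theoretical composite density.
    """
    return 100.0 * (rho_reference - rho_sample) / rho_reference

# Hypothetical densities in g/cm^3 (placeholders for illustration only)
rho_ref = 1.20   # oven-cured reference with minimal voids
rho_fp  = 0.70   # frontally polymerized sample with visible porosity
print(f"estimated void content: {void_content_percent(rho_fp, rho_ref):.0f}%")
```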
The application of frontal polymerization to additive manufacturing has advantages in energy consumption and speed of printing. Additionally, with frontal polymerization, it is possible to print free-standing structures that require no supports. A resin was developed using a mixture of epoxies and vinyl ether with an iodonium salt and peroxide initiating system that frontally polymerizes through radical-induced cationic frontal polymerization. The formulation, which was optimized for reactivity, physical properties, and rheology, allowed the printing of free-standing structures. Increasing ratios of vinyl ether and reactive cycloaliphatic epoxide were found to increase the front velocity. Addition of carbon nanofibers increased the front velocity more than the addition of milled carbon fibers. The resin filled with carbon nanofibers and fumed silica exhibited shear-thinning behavior and was suitable for extrusion-based printing at a weight fraction of 4 wt %. A desktop 3D printer was modified to control resin extrusion and deposition with a digital syringe dispenser. Flexural properties of molded and 3D-printed specimens showed that specimens printed in the transverse direction exhibited the lowest strength, likely due to the presence of voids, adhesion issues between filaments, and preferential carbon nanofiber alignment along the filaments. Finally, free-standing printing of single, angled filaments and helical geometries was successfully demonstrated by coordinating ultraviolet-based reaction initiation, low air pressure for resin extrusion, and printing speed to match front velocity.
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsapm.3c02226 . Mechanism of radical-induced cationic frontal polymerization, differential scanning calorimetry curves for the system containing the printing resin, effects of initiator concentration on front velocity, and representative stress–strain curves obtained from tensile testing of the molded frontal polymerization specimens ( PDF ) Supplementary Material The authors declare no competing financial interest. Acknowledgments This work was supported by the National Science Foundation under Grant 2051050 (REU SMART Polymers), the US National Science Foundation under Grant OIA-1946231, the Louisiana Board of Regents for the Louisiana Materials Design Alliance (LAMDA), the NASA EPSCoR Research Award Program grant (under Cooperative Agreement Number 80NSSC19M0055), and the Louisiana Transportation Research Center TIRE grant (Project 24-1TIRE, Task Order No. DOTLT1000496). Dr. Corina Barbalata and Luis Velazquez from the Department of Mechanical & Industrial Engineering at Louisiana State University are gratefully acknowledged for their assistance with robotic 3D printing. Scanning electron microscopy and energy dispersive X-ray spectroscopy of carbon fiber composites were performed at the Shared Instrumentation Facility (SIF) at Louisiana State University.
CC BY
no
2024-01-16 23:45:29
ACS Appl Polym Mater. 2023 Dec 15; 6(1):572-582
oa_package/e8/72/PMC10788858.tar.gz
PMC10788859
38147583
Introduction Photodetectors (PDs) are ubiquitous electronic devices in modern technology that harvest optical signals and convert them into electrical responses. 1 − 5 Depending on the device architecture, PDs usually require an external bias to assist the directional movement of the photogenerated charge carriers (electrons and/or holes) and generate a photocurrent. 6 , 7 However, by engineering the energy bands of the constituent materials, an internal electric field can be produced at their interface and utilized as the driver for the photogenerated carriers. Self-powered PDs operate under zero externally applied bias and are deemed technologically important for low-power electronics such as image sensing and optical communications. 8 , 9 Thus, self-powered PDs not only reduce the cost of the devices but also greatly reduce the size of the whole integrated system. 10 − 14 Traditional approaches to designing self-powered PDs rely on p–n homojunctions, heterojunctions, and Schottky junctions, taking advantage of their photovoltaic effects. 12 , 15 − 18 Among these junctions, metal/semiconductor interfaces have prompted growing scientific interest. However, the construction of Schottky junction PDs is limited by the choice of two-dimensional (2D) material and metal electrodes, and a pair of suitable metal electrodes must be found for asymmetrical contact engineering. 17 , 19 − 22 On the other hand, polarization-sensitive PDs are in high demand for state-of-the-art technologies including quantum computing, efficient three-dimensional object detection in light detection and ranging (LiDAR) devices, high-density optical signal processing, navigation, and high-contrast polarizers. 8 , 9 Detection of optical polarization utilizing optically anisotropic materials has attracted increasing research interest in recent years. Low-symmetry crystals, such as layered van der Waals materials including black phosphorus, GaTe, GeS, SnS, SnSe, ReSe 2 , ReS 2 , and GeAs 2 , exhibit in-plane optical anisotropy and birefringence, making them promising materials for the detection of optical polarization. 23 − 29 However, poor air stability and the difficulty of oriented growth are among the shortcomings that limit their performance and practicality. 2 , 30 The investigation of stable low-symmetry 2D materials, their polarization-selective light–matter interaction, and the interplay among layer thickness, structural anisotropy, and optical anisotropy is still at an early stage. 31 Recently discovered Janus materials provide additional freedom to introduce an optically anisotropic response in monolayer 2D materials. In particular, the recently prepared CrSBr is an air-stable, optically anisotropic semiconductor with a band gap of 1.5 eV. 32 − 34 Exfoliation of CrSBr yields elongated, high-aspect-ratio flakes along the b -axis. This peculiarity of the exfoliated flakes causes in-plane optical anisotropy and a directional dependence that can affect the electron–phonon and light–matter interactions. 32 − 34 Moreover, the puckered crystal structure of CrSBr is a crucial feature: it breaks the sublattice symmetry, giving rise to optical anisotropy and thus to layer-dependent, polarization-sensitive properties. 23 , 32 , 35 , 36 In this work, we address several of the aforementioned challenges with a simply designed device based on a few layers of CrSBr in a linear Au/CrSBr/Au architecture.
While CrSBr enables a strongly polarization-sensitive photoresponse, the Schottky potentials developed at both the Au/CrSBr and CrSBr/Au interfaces enable a self-powered device that exhibits a position-sensitive, binary photoresponse. We have systematically investigated the polarization-resolved and self-powered device performance under a series of excitation sources from 488 to 633 nm, demonstrating a broadband photoresponse. The position sensitivity of the photocurrent was found to be ∼0.37 nA/μm under 514 nm excitation at an intensity of 1.41 mW/cm 2 . Additionally, a highly polarization-sensitive photoresponse originating from the anisotropic light–matter interaction has been studied. The polarization dependence of the photocurrent was further corroborated by statistical averaging over several samples, by evaluating the performance of CrSBr devices on different substrates and over an extended period (150 days), and by spectroscopic characterization such as Raman and photoluminescence (PL) spectroscopy. Our work thus provides a viable route to fabricating high-performance, self-biased, polarization-sensitive PDs while avoiding complex device fabrication and interface engineering requirements.
Methods Device Fabrication The single-crystal CrSBr PD was fabricated by a simple, dry transfer method, with a highly p-doped silicon wafer covered with 300 nm thick SiO 2 used as the substrate. The metal electrode patterns were fabricated on the SiO 2 /Si substrate by direct lithography (MicroWriter, Durham Magneto Optics Ltd.) followed by metal lift-off. For the electrodes, optimized layers of Cr (5 nm) and Au (45 nm) were deposited on the silicon substrate by sputtering. Finally, lift-off was performed in hot acetone, and the substrate was rinsed with IPA. Next, multilayer CrSBr was exfoliated onto a thin PDMS membrane from the bulk crystal using Nitto tape. The targeted CrSBr flake on the PDMS was transferred onto the patterned silicon substrate using the micropositioning arm of a transfer stage. The device was imaged using an optical microscope, with the optical parameters adjusted to observe the contrast of CrSBr against SiO 2 /Si. Device Characterization Raman spectra were recorded at room temperature with a WITec micro-Raman system (alpha300R) equipped with a 532 nm laser. The thickness of the exfoliated layers was obtained by AFM (Bruker, Dimension Icon). The electrical properties of the fabricated devices were measured in a homemade probe station (HFS600E-PB4) integrated with a source meter (Keithley 2610). To perform photocurrent measurements, the 514 nm laser was focused on the device with a 100× objective; the laser spot was nearly flat with a diameter of about 500 nm. For polarization-dependent measurements, a half-wave plate, which rotates the plane of polarization of the linearly polarized laser beam, was placed between the light source and the PD. The power and polarization of the incident light were controlled using a variable neutral density filter and the half-wave plate, respectively. We first checked the polarization direction of the laser with an optical polarimeter (Thorlabs), which confirmed a fully polarized laser signal, and referenced it to the crystalline axes of the samples using the optical microscope and a manual goniometer. The data for different polarization angles were obtained by changing the goniometer angle. All of the measurements were carried out at room temperature in an ambient environment.
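The polarization-stepping procedure can be summarized numerically. The sketch below assumes only the standard behavior of a half-wave plate acting on linearly polarized light (the transmitted polarization is rotated by twice the angle between the incident polarization and the plate's fast axis); it is not tied to any specific instrument-control API, and the fast-axis offset is a hypothetical parameter.

```python
import numpy as np

def polarization_angle(hwp_angle_deg: float, fast_axis_offset_deg: float = 0.0) -> float:
    """Polarization angle of linearly polarized light after a half-wave plate.

    A half-wave plate rotates the incident linear polarization by twice the
    angle between the polarization and the plate's fast axis.
    """
    return (2.0 * (hwp_angle_deg - fast_axis_offset_deg)) % 360.0

# Stepping the incident polarization from 0 to 360 deg in 10 deg increments,
# as in the photocurrent measurements, requires plate angles of 0-180 deg in 5 deg steps.
hwp_angles = np.arange(0.0, 180.0 + 5.0, 5.0)
pol_angles = [polarization_angle(a) for a in hwp_angles]
print(pol_angles[:5])  # [0.0, 10.0, 20.0, 30.0, 40.0]
```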
Results and Discussion Materials and Device Design Multilayer CrSBr flakes were mechanically exfoliated from a bulk crystal grown by the chemical vapor transport technique in quartz ampoule from elements. 37 Figure 1 a shows the layered crystal structure of CrSBr, which belongs to the Pmmn ( D 2h ) orthorhombic space group. Each buckled CrS plane is sandwiched between the top and bottom Br atoms, and these layers stack through the van der Waals interaction along the c -axis. To explore the nature of the phonon modes, Raman spectroscopic characterization of the crystal was performed by using a 532 nm excitation laser wavelength. Figure 1 b shows the typical Raman spectra of an ∼82 nm CrSBr flake, with the incident laser linearly polarized along the a and b crystalline axes. Three primary Raman peaks, assigned as P 1 (≈110 cm –1 ) for the A 1g 1 phonon mode, P 2 (≈245 cm –1 ) for the A 1g 2 mode, and P 3 (≈343 cm –1 ) for the A 1g 3 mode, were observed along the b -axis (as discussed later). Only a single peak at 245 cm –1 was found in the spectra measured with the polarization along the a -axis, consistent with previous reports. 38 All Raman peaks correspond to the out-of-plane A 1g vibrational modes, which aligns with the literature. 38 The PL spectra under 532 nm excitation at room temperature show a strong emission centered at around 1.31 eV ( Figure 1 c), which originates from the band-edge emission rather than localized ligand-field luminescence. 38 − 40 The thickness-dependent Raman and PL spectra of the CrSBr are shown in Figure S1 of the Supporting Information, corroborating previous reports. 34 , 38 The exact thickness of the flakes was determined by AFM topography. Figure 1 d shows a typical AFM image of exfoliated few-layer CrSBr transferred on prepatterned gold electrodes on the SiO 2 /Si substrate, which was further used as a phototransistor device at the ambient conditions. Optoelectronic Characterization of the Device under Unpolarized Photon Flux The current–voltage ( I–V ) characteristics of the device under dark conditions are shown in Figure 2 a; the inset shows an optical microscopy image of the device. The I–V curve confirms a typical nonlinear diode-like characteristic, indicating the formation of Schottky junctions at the interface of semiconducting CrSBr and Au electrodes. 41 , 42 The I – V characteristics of CrSBr under dark and light of wavelength 514 nm have been shown in Supporting Information Figure S2 . To elucidate the self-powered photodetection properties of the device, the transient device current was recorded under periodic illumination of a 514 nm laser at different excitation intensities without applying any external bias ( V SD = 0 V). Figure 2 b shows the transient net photocurrent ( I ph ) of the device measured under varying excitation power, where the net photocurrent is defined as the difference between the net device current under illumination ( I l ) and dark conditions ( I d ), that is, I ph = I l – I d . It can be seen that the photocurrent increased with the illumination power, which is consistent with the previous measurements of both self-powered devices and devices to which an external bias was applied. 43 − 45 The photocurrent is generated purely due to the contribution of the potentials generated at the electrode/CrSBr interface. Importantly, the generated potential has opposite polarities, further limiting the circuit’s photocurrent. 
Thus, the obtained photocurrent is not comparable with traditional p–n junction-based self-biased devices. However, in order to obtain a qualitative comparison of the device performance figure of merit of this device, we performed an external bias-dependent photocurrent measurement as shown in Figure S3 in the Supporting Information, which shows a photocurrent value up to 20 nA under the excitation of 1.4 mW/cm 2 power density. The self-powered behavior of the device originates from the lateral photovoltaic effect due to a built-in electric field resulting from the band offset at the metal–semiconductor Schottky junction (Au–CrSBr and CrSBr–Au) interfaces. 44 , 46 , 47 Note that both of the electrodes forming the Schottky junction have opposite built-in electric fields ( Figure 2 c (i)), which should nullify the photocurrent for uniform illumination over the whole device area. However, in our confocal measurement setup, the excitation laser spot size (of diameter ∼500 nm) is smaller than the device active area (length 5 μm), which enabled us to map the photocurrent over the device area by selectively exciting the specific Schottky junction associated with the individual electrodes ( Figure 2 c (ii) and (iii)). In this situation, the photoexcited electrons cause a transient change of band offset at the illuminated Schottky junction, which drives the photogenerated carriers contributing to the photocurrent. Here, the second Schottky junction remains as a passive resistor to the photocurrent, which partially reduces the photocurrent magnitude. 48 A gradual change in the position of the illuminated area from the Au electrode toward the center of the device leads to reduced photocurrent as the built-in potential has the strongest value at the Au–CrSBr interface and it gradually decays ( Figure 3 ). 48 As expected, at the center of the device, where the distance between the excitation spot and both electrodes is the same, the magnitude of the photocurrent becomes insignificant. However, when the excitation spot came closer to the second electrode, an opposite polarity of the photocurrent was observed, and the magnitude of the photocurrent increased with a gradual approach toward the second electrode. This is because, in this state, the photocurrent is associated with the Schottky junction at the second electrode, which generates an electric field in the opposite direction. Note that the magnitude of the photocurrent varied slightly, which could be due to variations of local resistance under illumination and under dark conditions. The change in photocurrent as a function of position at 0 V bias is also shown in the Supporting Information ( Figure S4 ). To consolidate our observation, we recorded position-sensitive measurements while applying bias (source–drain) voltage in both directions. In this case, a nearly constant photocurrent was observed for both directions of applied bias. This is because the applied bias drives the photogenerated charge carriers instead of the built-in electric field in this measurement mode. Hence, the applied bias removes the position sensitivity of the photocurrent as the applied bias has a higher magnitude than the Schottky potential. The proposed mechanism can be analytically modeled as follows. In this device, the photon absorption and photocarrier generation are dominated by the CrSBr layer. 
The total number of photogenerated electrons, n 0 , and the number of photogenerated electrons transmitted through the metal–semiconductor interface, N 0 , can be expressed in terms of the probability P of photogenerated electrons entering the interface, the laser power p , and the lifetime τ of the photogenerated charge carriers in the intrinsic CrSBr layer. 49 , 50 The electron diffusion equation at position r can then be written, and from it the electron density at r and N ( r ) can be derived in terms of the diffusion length, the diffusion constant and resistivity ρ of the semiconductor, the electron diffusion lifetime τ n , the electron density below the Fermi level ( E F 0 ), and the position x of the laser spot. From these quantities, the Fermi level of the semiconductor after laser irradiation at position r follows accordingly. This transient change in charge concentration at the excitation point strongly modulates the barrier height at the interface, which drives the photocurrent. Using the above relations, an expression for the LPE at a given laser irradiation position can be obtained in terms of a proportionality coefficient and the positions L and – L of the two electrodes. Therefore, the photocurrents obtained at the two opposite electrodes are symmetric with respect to the laser spot position, providing a suitable route to position-sensitive detection. 51 , 52 Device Performance For practical applications, self-powered Schottky junction-based PDs need to meet various criteria, such as an easy and inexpensive fabrication process, wide-band photodetection, and uncomplicated integration with CMOS technologies. 14 There are also important figures of merit to evaluate the performance of PDs, such as the photoresponsivity ( R ), detectivity ( D ), and gain (η) of the PD. We recorded the output characteristics and photocurrent of the device by systematically varying the illumination intensity. The ratio of the extracted photocurrent to the dark current, known as the photocurrent on/off ratio, is plotted as a function of the incident power in Figure 4 a. It can be seen that the photocurrent on/off ratio systematically increases with the excitation power, which is attributed to the increased density of free e–h pairs in the CrSBr due to the band-to-band transition of the photogenerated carriers. 2 , 53 Note that the measured on/off ratio is limited by the sensitivity of our current source meter, which allows the dark current to be measured only down to 1 nA. The broadband photoresponse of the devices has been studied utilizing 488, 514, 568, and 633 nm excitation sources of an Ar–Kr laser. The obtained photoresponse under a constant power density is provided in Figure S5 . The photoresponsivity of the detector is defined as the photocurrent generated per unit incident power on the effective area of the PD, which can be expressed as R = I ph / P , where I ph is the net photocurrent and P is the incident power on the device. 2 , 53 Figure 4 a presents the calculated photoresponsivity as a function of the incident power density at zero bias voltage. The highest value of the photoresponsivity found was ∼0.26 mA/W at 0.42 mW/cm 2 and zero bias voltage. The value of R shows a negative dependence on the incident power, which is consistent with previously obtained results. 7 , 54 , 55 With increasing illumination intensity, the number of photocarriers (e–h pairs) increases, which induces more recombination of the photogenerated excitons due to the high exciton binding energy. 56 , 57
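Returning briefly to the position-sensitive response analyzed above, its qualitative shape can be captured by the lateral-photovoltaic-effect form in which the signal scales with the difference of two exponential decays from the electrodes at + L and – L. The functional form below follows the LPE literature cited above; the coefficient, diffusion length, and electrode half-spacing are hypothetical illustrative values, not parameters fitted to this device.

```python
import numpy as np

def lpe_signal(x_um, L_um=2.5, diffusion_length_um=2.0, K=1.0):
    """Lateral photovoltaic/photocurrent signal vs. laser spot position x.

    Standard LPE form: the response scales with the difference of exponential
    decays from the two electrodes located at +L and -L. All parameters here
    are illustrative placeholders.
    """
    return K * (np.exp(-np.abs(x_um - L_um) / diffusion_length_um)
                - np.exp(-np.abs(x_um + L_um) / diffusion_length_um))

positions = np.linspace(-2.5, 2.5, 11)   # scan across a 5 um channel
for x, s in zip(positions, lpe_signal(positions)):
    print(f"x = {x:+.1f} um  ->  relative signal {s:+.2f}")
```

The sketch reproduces the observed behavior: a maximum response of opposite sign near each electrode and a vanishing signal at the center of the channel.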
The detectivity describes the smallest detectable signal. Assuming that shot noise is the major factor contributing to the dark current, the detectivity can be expressed as D = R √ A /√(2 e I d ), where R is the photoresponsivity, A is the area of the incident light on the PD, e is the elementary charge, and I d is the dark current. The variation of the detectivity of the PD with the incident power density is shown in Figure 4 b. The value of D decreases with increasing incident power, and the maximum value of D obtained was ∼3.4 × 10 8 Jones. The change of D with laser power can be explained by the linear dependence of D on R and a constant dark current. Note that laser heating of the device channel has been assumed to be negligible. 2 , 58 The photocurrent gain is a dimensionless figure of merit of the device given by the number of photoexcited carriers generated per incident photon. Assuming that all incident photons are absorbed in the device active area, η can be expressed as η = Rhc /( e λ), where h is the Planck constant, c is the speed of light in free space, and λ is the wavelength of the incident photon beam. The variation of the gain as a function of the incident power is shown in Figure 4 b. The maximum value of η was ∼6.2 × 10 –2 % at an incident power density of 0.42 mW/cm 2 under self-powered conditions, thus demonstrating the relatively high efficiency of the device for light detection. 46 The transient photoresponse and recovery times of the devices were found to be nearly 52 and 38 ms, respectively, which is faster than typical monolayer semiconducting transition metal dichalcogenide-based devices, where a prolonged photoresponse in the planar configuration is observed. 56 , 59 Polarization-Resolved Photodetection at the Ambient Conditions CrSBr is suitable for high-sensitivity polarization detection due to its low-symmetry crystal structure, which is similar to that of black phosphorus, 60 CrPS 4 , 61 and GeSe 2 . 62 Thus, we performed further polarization-dependent photocurrent measurements on the same devices under the excitation of a linearly polarized 514 nm photon beam. A schematic illustration of the experimental setup is given in Figure 5 a. The details of the experimental setup are discussed in the Methods Section . The photocurrent was measured under the excitation of the polarized laser beam while varying the angle of the incident polarization from 0 to 360° with respect to the crystalline axes of the single-crystalline CrSBr. The step interval was 10° and no external bias was applied. Note that the excitation laser illuminated the area next to one of the Au electrodes throughout the whole measurement. We further evaluated the corresponding angle-dependent photoresponsivity, gain, and detectivity, as shown in Figure 5 b,c, respectively, which exhibit quasi-sinusoidal behavior with respect to the polarization angle of the incident photon beam. We further calculated the photocurrent anisotropy ratio r = ( I ph-max – I ph-min )/( I ph-max + I ph-min ) and the dichroic ratio d = I ph-max / I ph-min , which were found to be 0.65 and 3.43, respectively, for an incident power density of 1.4 mW/cm 2 , indicating very high sensitivity to the polarization of the incident photons even under self-powered conditions.
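The figures of merit quoted above follow from standard photodetector relations, R = I ph / P , D = R √ A /√(2 e I d ), and η = Rhc /( e λ). The sketch below evaluates these textbook definitions; the only input taken from the text is the peak responsivity of ∼0.26 mA/W at 514 nm, which reproduces the quoted gain of ∼6.2 × 10 –2 %, while the area and dark-current arguments of the detectivity function are placeholders to be filled with measured values.

```python
import math

E_CHARGE = 1.602e-19   # C
H_PLANCK = 6.626e-34   # J*s
C_LIGHT  = 2.998e8     # m/s

def responsivity(i_ph_a: float, power_w: float) -> float:
    """Photoresponsivity R = I_ph / P (A/W)."""
    return i_ph_a / power_w

def detectivity(r_a_per_w: float, area_cm2: float, i_dark_a: float) -> float:
    """Shot-noise-limited detectivity D = R*sqrt(A)/sqrt(2*e*I_d) (Jones).

    Call with the measured illuminated area and dark current; no values from
    the paper are assumed here.
    """
    return r_a_per_w * math.sqrt(area_cm2) / math.sqrt(2.0 * E_CHARGE * i_dark_a)

def gain_percent(r_a_per_w: float, wavelength_m: float) -> float:
    """Gain (external quantum efficiency) eta = R*h*c/(e*lambda), in percent."""
    return 100.0 * r_a_per_w * H_PLANCK * C_LIGHT / (E_CHARGE * wavelength_m)

# Reported responsivity at 514 nm reproduces the quoted gain of ~6.2e-2 %.
R = 0.26e-3  # A/W
print(f"eta ~ {gain_percent(R, 514e-9):.2e} %")
```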
Polarization-Resolved PL at the Ambient Conditions We also performed the polarization-dependent PL measurement under ambient conditions. Figure 6 a shows the PL spectra recorded for the excitation of the 514 nm laser beam at different polarization angles with respect to the crystalline axes of the CrSBr. The origin of the PL peak centered at 950 nm (∼1.31 eV) can be traced to 1s excitonic recombination, as suggested previously. 63 In Figure 6 b, we showcase a polar plot of the PL intensity of the 1s exciton peak, which clearly exhibits a dumbbell-shaped anisotropic nature with a 180° variation period and intensity maxima at 90 and 270° (along the b -direction). It can be seen that the optical band-to-band transitions are allowed only along the b -direction. To understand this peculiar directional dependence of the PL, one has to consider the electronic band structure of CrSBr. 32 , 33 , 63 It is known from the theoretical calculations that the large band gap possesses highly pronounced anisotropy in the Γ– Y direction (along the b -direction of the crystallographic axis), which vanishes completely along the Γ– X direction (along the a -direction of the crystallographic axis). 14 , 15 , 36 This implies that interband transitions are forbidden along the a -axis from the conduction band maxima and valence band minima at Γ. 32 , 33 , 63 We further performed polarization-dependent spatially resolved PL measurements with a 532 nm excitation. For this experiment, we used two nearly perpendicular CrSBr flakes exfoliated on the SiO 2 /Si substrate, as shown in the insets of Figure 6 c,d. In Figure 6 c, the incident laser polarization is aligned with the flake orientation at the top along the b -axis and hence we observe strong luminescence, while it is negligible for the flake at the bottom in a perpendicular direction. For the experiment shown in Figure 6 d, we kept the polarization of the laser fixed and rotated the sample stage by 90° (see the insets of Figure 6 c,d). For this scenario, we observed that the b -axis of the flake at the bottom was now aligned to the polarization of the incident laser, and it showed strong luminescence while the top flake did not. Polarization-Resolved Raman Spectroscopy Raman spectroscopy is a powerful tool to characterize anisotropic materials. The scattered light intensity ( Ĩ ) is dependent on the polarization of the incident light ( e i ) and the scattered light ( e s ) via the Raman tensor (Γ) for the corresponding Raman modes. According to the Placzek model, the Raman scattering intensity is given by Î ∝ | e i Γ· e s | 2 , where e i and e s are the polarization unit vectors for the incident and scattered light and Γ is the Raman tensor for the Raman active modes. 60 , 64 Hence, it is possible to extract information about the crystallographic anisotropy for any material by considering the polarization-dependent Raman response. Figure 7 shows the polarization-dependent Raman spectra of multilayer CrSBr flakes. Figure 7 a,b shows stack plots of the polarization-dependent Raman spectra covering both a and b directions for A 1g 2 and A 1g 3 at the 245 and 343 cm –1 modes, respectively. The corresponding polar plots of the two principal Raman modes, A 1g 2 and A 1g 3 , are shown in Figure 7 c,d. The measurement was carried out by coaligning the linearly polarized excitation light with the detector polarization. 
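The angular dependence expected from the Placzek picture can be illustrated numerically. The sketch below evaluates I ∝ | e i ·Γ· e s | 2 for an A g -type mode with a diagonal in-plane Raman tensor in the co-polarized (parallel) configuration; the tensor elements are arbitrary illustrative numbers, not fitted CrSBr values.

```python
import numpy as np

def raman_intensity_parallel(theta_deg, a=1.0, b=0.4):
    """Parallel-configuration Raman intensity for an A_g-type mode.

    With e_i = e_s = (cos(theta), sin(theta)) and an in-plane Raman tensor
    diag(a, b), the Placzek expression I ~ |e_i . Gamma . e_s|^2 reduces to
    (a*cos^2(theta) + b*sin^2(theta))^2.  a and b are arbitrary placeholders.
    """
    th = np.radians(theta_deg)
    return (a * np.cos(th) ** 2 + b * np.sin(th) ** 2) ** 2

angles = np.arange(0, 360, 10)          # same 10 deg stepping as the experiment
intensity = raman_intensity_parallel(angles)
# Two-lobed ("dumbbell") polar pattern with a 180 deg period; swapping a and b
# rotates the lobes by 90 deg.
print(np.round(intensity[:4], 3))
```

The result is the two-lobed, 180°-periodic polar pattern described below, and exchanging the two tensor elements rotates the lobes by 90°, mirroring the contrast between the A 1g 2 and A 1g 3 modes.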
We observe that the A 1g 1 (not shown here) and A 1g 3 modes of our multilayer CrSBr crystal show an anisotropic, dumbbell-shaped vibrational response with a 180° variation period and intensity maxima at 90° and 270°, i.e., along the crystallographic b direction. However, the A 1g 2 mode is an exception: its polar response is rotated by 90°, with the scattered intensity showing maxima at 0 and 180°, i.e., along the crystallographic a direction, with the same 180° variation period. The anisotropic polarization response of the A 1g 2 mode along the a -axis can be attributed to a quasi-1D-like electron–phonon interaction rooted in the electronic structure of CrSBr, which is strongly influenced by the coupling to the high density of electronic states along the symmetrical Γ– X direction. This directional dependence of the different Raman modes is consistent with recent observations on single-crystal CrSBr, which has an orthorhombic structure with D 2h symmetry. 63 The observable out-of-plane A 1g modes can be well fitted with a Raman scattering intensity obeying the relation ( s 2 sin 2 θ + c 2 cos 2 θ) 2 for the parallel configuration. This corresponds to the experimental setup in which the polarization direction of the scattered light is normal to the plane, without the presence of the B g Raman modes. 25 Hence, the obtained PL and Raman spectra corroborate the anisotropic responses of the A 1g 2 and A 1g 3 Raman modes of the crystal. Thus, we confirm that the polarization-dependent behavior is an intrinsic property of CrSBr. In addition, the obtained r and d values of our PD are larger than those of previously reported devices based on other 2D materials. 62 , 65 − 72 In Figure 8 a, we present a comparison of the reported dichroic ratios of PD devices based on few-layer 2D materials; our CrSBr device attains a value of 3.4, one of the highest reported so far. The measured devices exhibit highly stable and reproducible performance. To test the long-term stability of the device, we also recorded photocurrent data under ambient conditions after a period of ∼5 months ( Figure 8 b). We did not observe any significant change in the device performance during this period. These results thus demonstrate the great potential of 2D CrSBr for application in self-powered, linearly dichroic optoelectronic devices.
Conclusions We studied CrSBr crystals as a promising air-stable, self-powered, polarization-sensitive PD. The detector is based on a multilayer CrSBr/Au Schottky junction prepared by mechanical exfoliation and a dry transfer method. The photocurrent was measured in the absence of an external bias under varying incident light power, yielding a photoresponsivity of ∼0.26 mA/W at an incident power density of 0.42 mW/cm 2 . The photodetectivity of the device was about 3.4 × 10 8 Jones and the gain was η = 6.2 × 10 –2 % at zero bias voltage. The photoresponse of the devices was found to be position-dependent owing to the opposing built-in electric fields at the two Schottky junctions. The anisotropy ratio of the device was estimated to be ∼0.65 and the dichroic ratio to be around 3.4. The high degree of anisotropy in photodetection arises from the puckered crystal structure of CrSBr, which breaks the sublattice symmetry. These results clearly demonstrate that CrSBr is a promising anisotropic 2D material for the development of high-quality polarization-dependent PDs for imaging and optical communication applications.
Recent progress in polarization-resolved photodetection based on low-symmetry 2D materials has formed the basis of cutting-edge optoelectronic devices, including quantum optical communication, 3D image processing, and sensing applications. Here, we report an optical polarization-resolving photodetector (PD) fabricated from multilayer semiconducting CrSBr single crystals with high structural anisotropy. We demonstrate self-powered photodetection due to the formation of Schottky junctions at the Au–CrSBr interfaces, which also causes the photocurrent to display a position-sensitive, binary nature. The self-biased CrSBr PD showed a photoresponsivity of ∼0.26 mA/W with a detectivity of 3.4 × 10 8 Jones at 514 nm excitation with an incident power density of 0.42 mW/cm 2 under ambient conditions. The optical polarization-induced photoresponse exhibits a large dichroic ratio of 3.4 when the polarization is set along the a - and b -axes of single-crystalline CrSBr. The PD also showed excellent stability, retaining >95% of the initial photoresponsivity under ambient conditions for more than five months without encapsulation. Thus, we demonstrate CrSBr as a fascinating material for ultralow-power, optical polarization-resolving optoelectronic devices for cutting-edge technology.
Supporting Information Available The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsami.3c13552 . Layered-dependent Raman and PL spectra of CrSBr, I – V characteristic of CrSBr with and without illumination, bias-dependent photocurrent, position-sensitive photocurrent at 0 V, and wavelength-dependent photocurrent ( PDF ) Supplementary Material The authors declare no competing financial interest. Acknowledgments J.P., S.S., M.K.T., G. H., and M.K. acknowledge support from the Czech Science Foundation (project no. 20-08633X). M.V. acknowledges the Czech Academy of Sciences Lumina Quaeruntur (fellowship no. LQ200402201). Z.S. was supported by project LUAUS23049 from the Ministry of Education, Youth, and Sports (MEYS) and used resources of the large infrastructure from project reg. No. CZ.02.1.01/0.0/0.0/15_003/0000444 financed by the EFRR.
CC BY
no
2024-01-16 23:45:29
ACS Appl Mater Interfaces. 2023 Dec 26; 16(1):1033-1043
oa_package/34/fd/PMC10788859.tar.gz